0213F01

ICT Seventh Framework Programme (ICT FP7)

Grant Agreement No: 288828

Bridging Communities for Next Generation Policy-Making

Towards Policy-Making 2.0:
The International Research Roadmap on ICT for Governance and Policy Modelling

Internal Deliverable Form

Project Reference No.: ICT FP7 288828
Deliverable No.: D2.2.2
Relevant Workpackage: WP2
Nature: Report
Dissemination Level: Restricted
Document version: FINAL 1.0
Date: 31 July 2013
Authors: David Osimo & Francesco Mureddu (T4I2), Riccardo Onori & Stefano Armenia (CATTID), Gianluca Carlo Misuraca (IPTS)
Reviewers:
Document description: This deliverable describes the final version of the new International Research Roadmap on ICT Tools for Governance and Policy Modelling.

History

Version   Date         Reason        Revised by
1.0       30/06/2013   1st version

TABLE OF CONTENTS

EXECUTIVE SUMMARY
1. BACKGROUND: WHY A ROADMAP?
  1.1. The rationale of the roadmap: what is the problem?
  1.2. An open and recursive methodology
  1.3. Scope and definition
  1.4. Policy: Between politics and services
2. NOT JUST ANOTHER HYPE: THE DEMAND SIDE OF POLICY-MAKING 2.0
  2.1. The typical tasks of policy-makers: the policy cycle
  2.2. The traditional tools of policy-making
  2.3. The key challenges of policy-makers
    2.3.1. Detect and understand problems before they become unsolvable
    2.3.2. Generate high involvement of citizens in policy-making
    2.3.3. Identify "good ideas" and innovative solutions to long-standing problems
    2.3.4. Reduce uncertainty on the possible impacts of policies
    2.3.5. Ensure long-term thinking
    2.3.6. Encourage behavioural change and uptake
    2.3.7. Manage crisis and the "unknown unknown"
    2.3.8. Moving from conversations to action
    2.3.9. Detect non-compliance and mis-spending through better transparency
    2.3.10. Understand the impact of policies
  2.4. When policy-making 2.0 becomes a reality: a tentative vision for 2030
    2.4.1. Agenda setting phase: recognizing the problem
    2.4.2. Policy design
    2.4.3. Implementation
    2.4.4. Evaluation
  2.5. The key challenges for policy makers and the corresponding phases in the policy cycle
3. THE SUPPLY SIDE: CURRENT STATUS AND THE RESEARCH CHALLENGES
  3.1. Policy Modelling
    3.1.1. Systems of Atomized Models
    3.1.2. Collaborative Modelling
    3.1.3. Easy Access to Information and Knowledge Creation
    3.1.4. Model Validation
    3.1.5. Immersive Simulation
    3.1.6. Output Analysis and Knowledge Synthesis
  3.2. Data-powered Collaborative Governance
    3.2.1. Big Data
    3.2.2. Opinion Mining and Sentiment Analysis
    3.2.3. Visual Analytics for collaborative governance: the opportunities and the research challenges
    3.2.4. Serious Gaming for Behavioural Change
    3.2.5. Linked Open Government Data
    3.2.6. Collaborative Governance
    3.2.7. Participatory Sensing
    3.2.8. Identity Management
    3.2.9. Global Systems Science
4. THE CASE FOR POLICY-MAKING 2.0: EVALUATING THE IMPACT
  4.1. Cross analysis of case studies
    4.1.1. Global Epidemic and Mobility Model
      Impact of Gleam
    4.1.2. UrbanSim
    4.1.3. Opinion Space
    4.1.4. 2050 Pathways Analysis
    4.1.5. Cross analysis of the case studies
  4.2. Survey of Users' Needs results
  4.3. Analysis of the prize winners
  4.4. Lessons learnt from cases and prize
  4.5. An additional research challenge: counterfactual impact evaluation of Policy Making 2.0
5. CONCLUSIONS: POLICY-MAKING 2.0 BETWEEN HYPE AND REALITY
6. REFERENCES
7. LIST OF ACRONYMS

LIST OF FIGURES

Figure 1: The fragmentation of policy-making 2.0
Figure 2: Outline of the participatory process
Figure 3: Policy Cycle and Related Activities
Figure 4: Total Disasters Reported
Figure 5: Agricultural Production and Externalities Simulator (APES)
Figure 6: Conversational Modelling Interface
Figure 7: The PADGET Framework
Figure 8: The Time-Space Matrix
Figure 9: COMA, COllaborative Modelling Architecture
Figure 10: OCOPOMO eParticipation Platform
Figure 11: Twitrratr
Figure 12: Wordclouds
Figure 13: UserVoice
Figure 14: Open Data Business Model (source: Istituto Superiore Mario Boella)
Figure 15: LOD providers and their linkages
Figure 16: Rating other opinions in Opinion Space
Figure 17: Playing the My2050 game for the demand side
Figure 18: Adoption of ICT Tools and Methodologies for policy-making (source: CROSSOVER Survey of Users' Needs 2012)
Figure 19: Needs and Challenges in the Policy Making Process (source: CROSSOVER Survey of Users' Needs 2012)
Figure 20: A proposed evaluation framework for policy-making 2.0
Figure 21: Relation Between Policy-Making Needs and Research Challenges

Executive Summary

This deliverable introduces and describes the final version of the new International Research Roadmap on ICT tools for Governance and Policy Modelling, renamed by the project team as "Policy-Making 2.0". It is one of the core outputs of the CROSSOVER project, developed under WP2 Content Production.

The roadmap aims to establish the scientific and political basis for long-lasting interest in, and commitment to, next-generation policy-making by researchers and policy-makers. To this end, it analyses which technologies are currently available, for what concrete purposes, and what could become available in the future. The main rationale for such a document is the current fragmentation of the landscape across different stakeholders, disciplines, policy domains and geographical areas.

The document is the result of a highly participative process carried out between the first draft and the final roadmap, involving hundreds of people through 11 different input methods, from live workshops to online discussion.

After a brief introduction of the background, the document analyses the demand side: the current status of policy-making, with the key tasks (illustrated by the traditional policy cycle) and the existing challenges:

a. Detect and understand problems before they become unsolvable
b. Generate high involvement of citizens in policy-making
c. Identify "good ideas" and innovative solutions to long-standing problems
d. Reduce uncertainty on the possible impacts of policies
e. Ensure long-term thinking
f. Encourage behavioural change and uptake
g. Manage crisis and the "unknown unknown"
h. Moving from conversations to action
i. Detect non-compliance and mis-spending through better transparency
j. Understand the impact of policies
It then presents a concrete, tentative vision of how policy-making could look in 2030 if these challenges were overcome.
  
Section 3 represents the core of the roadmap and presents the key research challenges to be addressed to achieve this vision, updating the original version based on the input of the consultation. For each research challenge, it presents the current status, the existing gaps, and the short- and long-term research perspectives. The key research challenges are:

1. Policy Modelling
  1.1. Systems of Atomized Models
  1.2. Collaborative Modelling
  1.3. Easy Access to Information and Knowledge Creation
  1.4. Model Validation
  1.5. Immersive Simulation
  1.6. Output Analysis and Knowledge Synthesis
2. Data-powered Collaborative Governance
  2.1. Big Data
  2.2. Opinion Mining and Sentiment Analysis
  2.3. Visual Analytics for collaborative governance: the opportunities and the research challenges
  2.4. Serious Gaming for Behavioural Change
  2.5. Linked Open Government Data
  2.6. Collaborative Governance
  2.7. Participatory Sensing
  2.8. Identity Management
  2.9. Global Systems Science
  
But to what extent can policy-making 2.0 be said to genuinely improve policy-making? Section 4 looks at the available evidence about the impact of policy-making 2.0, across the case studies, the survey and the prize. As it emerges that no robust impact evaluation is available, we propose an additional research challenge on the impact evaluation of policy-making, accompanied by a proposed evaluation framework.
  
	
  
Finally, we summarize the findings of the document, bringing together the different sections and suggesting that policy-making 2.0 cannot be considered a panacea for all the issues related to bad public policies, but that at the same time it is more than just a neutral set of disparate tools. It provides an integrated and mutually reinforcing set of methods that share a similar vision of policy-making and that should be addressed in an integrated and strategic way; and it provides opportunities to improve the checks-and-balances systems behind decision making in government. As such, it should be further pursued.
[Figure residue from two embedded diagrams, retained in summary form: (1) the proposed evaluation framework for policy-making 2.0, linking Context (socio-political factors), Intervention (design of technology, design of methods, cost), Uptake (more participants, more diverse participation), Impact on efficiency (high quality of ideas, impact on actual decisions, better predictions) and Impact on effectiveness (improved performance of the public sector, improved empowerment of citizens); (2) the policy cycle (agenda setting, design, implement, monitor & evaluate) mapped to tools such as collaborative governance (e.g. IdeaScale, co-ment), social network analysis, serious gaming, crowdsourcing, open data, sentiment analysis, open data visualization, visualization/opinion mining, modelling and immersive simulation.]
1. BACKGROUND: WHY A ROADMAP?

1.1. The rationale of the roadmap: what is the problem?
  
	
  
The CROSSOVER project aims to consolidate and expand the existing community on ICT for Governance and Policy Modelling (built largely within FP7) by:

- Bringing together and reinforcing the links between the different global communities of researchers and experts: it will create directories of experts and solutions, and animate knowledge exchange across communities of practice, both offline and online;

- Reaching out and raising the awareness of non-experts and potential users, with special regard to high-level policy-makers and policy advisors: it will produce multimedia content, a practical handbook and high-level policy conferences with competitions for prizes;

- Establishing the scientific and political basis for long-lasting interest in and commitment to next-generation policy-making, beyond the mere availability of FP7 funding: it will focus on use cases and a demand-driven approach, involving policy-makers and advisors.

The CROSSOVER project pursues this goal through a combination of content production and ad hoc, well-designed online and offline animation, as well as strong links with existing communities outside the CROSSOVER project and outside the realm of e-Government.
	
  
The present deliverable is one of the core outputs of the project: the International Research Roadmap on ICT Tools for Governance and Policy Modelling. It aims to create a common platform between actors fragmented across different disciplines, policy domains, organisations and geographical areas, as illustrated in the figure below.

Figure 1: The fragmentation of policy-making 2.0

But most of all, it aims to provide a clear outline of what technologies are available now for policy-makers to improve their work, and what could become available tomorrow.
CROSSOVER builds on the results of the CROSSROAD project[1], which elaborated a research roadmap on the same topic throughout 2010. With respect to the previous roadmap, this document is firstly a revised and updated version. Besides this, it contains some fundamental novelties:

- A demand-driven approach: rather than focusing on the technology, the present roadmap starts from the needs and the activities of policy-making and then links the research challenges to them.

- An additional emphasis on cases and applications: for each research challenge, we indicate relevant cases and practical solutions.

- A clearer thematic focus on ICT for Governance and Policy Modelling, achieved by dropping the more peripheral grand challenges of Government Service Utility and Scientific Base for ICT-enabled Governance.

- A global coverage: while CROSSROAD focused on Europe, CROSSOVER includes cases and experiences from all over the world.

- A living roadmap: the present deliverable is accompanied by an online repository of tools, people and applications.

[1] http://CROSSROAD.epu.ntua.gr/
  
1.2. An open and recursive methodology
  	
  
The present Research Roadmap on Policy-Making 2.0 has been developed with a sequential approach based on the existing research roadmap developed by the CROSSROAD project. In order to achieve the goal of overcoming fragmentation, an open and inclusive approach was necessary.

In the initial phase of the project, up to M6 (March 2012), the consortium started a collection of literature and of information about software tools and application cases. In addition to this desk-based review, the document has benefited from the informal discussions held on the LinkedIn group of the project (Policy-Making 2.0), where more than 800 practitioners and researchers are discussing the practices and the challenges of policy-making.

The first draft of the roadmap was then released in M9 (June 2012) of the project for public feedback. The publication of the deliverable kicked off the engagement activities of the project, designed to provide further input and to improve the roadmap:
  
- As soon as it was released, the preliminary version of the roadmap was published in commentable format on the project website http://www.CROSSOVER-project.eu/. Animators stimulated discussion about it and generated comments from researchers and practitioners alike. This participatory process helped enrich the roadmap, which was then published in its final version after validation by the communities of practitioners and policy makers.

- Two workshops organised by the project aimed at gathering input on the research challenges and feedback on the proposed roadmap.

- An online survey, as well as several focus groups and meetings with practitioners from civil society and government, helped to focus the roadmap on the actual needs.

Figure 2: Outline of the participatory process
  
The process for updating the roadmap therefore included a wide set of contributions. Firstly, the CROSSROAD roadmap was enriched with desk-based research: 202 cases collected in the platform, plus 4 cases collected and described in the case studies performed by the National Technical University of Athens (NTUA), and the 50 applications to the prize.

This first draft was then published for comments by some of the 800 members of the LinkedIn group, who also provided relevant cases. An additional survey of users' needs provided insights from 240 respondents and from over 200 people present at focus groups. Additional discussions with the Global Systems Science community, third-party workshops and the US Policy Informatics Network helped refine the roadmap further.

The two workshops provided high-quality insight that enriched the roadmap with specific contributions.

In the table below we outline in detail the specific contribution of each input channel to the roadmap; each is described in full in the following sections.

Type of contribution | Extent of the contribution | Contribution to the roadmap

1) Comments to the roadmap | 40 comments; 9 different experts | Visual Analytics; Systems of Atomized Models; Model Validation; Serious Gaming

2) Presentations in the PMOD workshop | Papers received: 42; registered participants: 70; citizens of 20 countries present | Linked Open Government Data

3) Presentations in the Transatlantic workshop | 16 presentations; 30 participants | Collaborative Modelling; Systems of Atomized Models; Opinion Mining

4) Survey of User's Needs | 236 respondents; 33% engaged in policy design; 27% engaged in monitoring and evaluation; 22% engaged in agenda setting; 18% engaged in policy implementation | Impact of policy making 2.0; Roadmap methodology; Linked Open Government Data; Opinion Mining; Collaborative Governance

5) Focus groups | 139 attendants at Forum PA, the leading Italian conference on e-government; 35 attendants at the INSITE event on sustainability; 40 attendants at the webinar for the United Nations Development Programme | Impact of policy making 2.0; Roadmap methodology

6) Case studies | Collection of 202 tools and practices; elicitation of 20 best practices; further elicitation of 4 best practices for in-depth case study | Impact of policy making 2.0; Roadmap methodology; Annex with a repository of cases

7) Analysis of the prize | 47 submissions received; 10 shortlisted; 3 winners | Analysis of the prize process in the Impact chapter

8) LinkedIn group | 840 participants | Comments to the roadmap; increased attendance at the workshops; collection of practices and tools

Table 1: Contributions to the roadmap
  
1) Comments to the Roadmap
  
The roadmap has been published in commentable format in two different versions: a short one on Makingspeechtalk[2] and a full version, downloadable after answering the survey on the needs of policy-makers, available on the CROSSOVER website[3]. Everybody was able to comment on single parts of the roadmap or to propose new topics, application cases and research challenges. The aim of publishing the document in commentable format was to obtain input from experts for co-creating the roadmap. More specifically, we were interested in knowing whether the current formulation of the research challenges was acceptable, and we wanted to collect best practices and application cases from the community of experts and practitioners at large. As already mentioned, the roadmap received over 40 useful and detailed comments from a number of experts in the different domains.

[2] http://makingspeechestalk.com/CROSSOVER/
[3] http://www.CROSSOVER-project.eu/ResearchRoadmap.aspx
  
2) PMOD Workshop
  
The June 2012 workshop was the first of three to be organised under the CROSSOVER project. Formally titled "Using Open Data: policy modelling, citizen empowerment, data journalism", but generally referred to as PMOD (policy modelling), it set out to explore whether advocates' claims about the huge potential of open data (as an engine for a new economy, as an aid to transparency and, of particular relevance to CROSSOVER, as an aid to evidence-based policy modelling) were justified. In terms of organization, the event was run as a W3C/CROSSOVER workshop and held at the European Commission's Albert Borschette Conference Centre in the two days immediately prior to the Digital Agenda Assembly. That combination helped to secure good support from a high-calibre audience. 42 papers were received, and the majority were accepted by the programme committee for full presentation. Authors of several other papers, plus members of the programme committee, the CROSSOVER animators and a small number of invited guests, comprised the 70 registered attendees, of whom 67 turned up. The event reached a larger audience through a networking event organised on the evening following the workshop, to which attendees of the data workshop at the Digital Agenda Assembly were invited. Furthermore, through the live IRC channel and tweets using the #pmod hashtag, others were able to monitor proceedings. The agenda, attendee list and final report are all available on the W3C Web site, which provides a high profile for the workshop and the project.

Most of the results of the workshop were used to improve the research challenge on Linked Open Government Data.
  Data.	
  
	
  
3) Transatlantic Workshop
  
The Transatlantic Research on Policy Modelling Workshop was held in Washington, DC on January 28th and 29th, 2013. It was organized by the Millennium Institute and the New America Foundation (NAF), Washington, DC, USA. NAF is a nonprofit, nonpartisan public policy institute that invests in new thinkers and new ideas to address the next generation of challenges facing the United States. This event brought together speakers and attendees working on, and/or interested in, improving ICT tools for education and policy makers. The speakers and attendees came from diverse backgrounds, both technical and non-technical, to share experiences and knowledge and to discuss ways to make the current state of modelling and ICT more accessible and attractive for decision makers on both sides of the Atlantic Ocean. The models presented in the workshop have been integrated in the "Collaborative Modelling", "Systems of Atomized Models" and "Opinion Mining" research challenges.
  
	
  
4) Survey of User's Needs
The Survey of Users' Needs performed within the scope of the CROSSOVER project aimed at collecting the views and the requirements of policy-making stakeholders. More particularly, the survey intended to stimulate actual and potential practitioners, such as decision makers (government officials involved in the policy-making process) or policy advisors (technical experts advising decision-makers from outside government), to provide input, feedback and validation for the new research roadmap on ICT tools for Governance and Policy Modelling under development (CROSSOVER, 2012b). About 450 people took part in the overall exercise, combining live meetings (214 participants) and the online survey (240+ answers), and providing concrete elements to improve the CROSSOVER roadmap and the other activities to be carried out by the project.
  	
  
5) Focus groups
  
In addition to the survey, Tech4i2 ran a series of dedicated meetings where the roadmap was presented and followed up by intense dedicated discussion. These events were all high-profile, attended by policy-makers in the broad sense: not only government officials, but also policy advisors and civil society organisations. More precisely, three events were run:
  
• On the 17th of May 2012, CROSSOVER was invited to give a keynote speech at ForumPA on the CROSSOVER Research Roadmap. FORUM PA is a leading European exhibition exploring innovation in Public Administration and local systems. For 22 years, FORUM PA has attracted thousands of visitors and hundreds of exhibitors (public authorities, private companies and citizens) to come together and learn, with the participation of important leaders: ministers, Nobel prize winners (Amartya Sen, Edward Prescott), industry leaders (Luca Cordero di Montezemolo) and hundreds of speakers.

• On May 24th 2012, CROSSOVER was invited to attend the HUB/INSITE project meeting of sustainability practitioners from all over Europe. The Hub and the INSITE project brought together more than 25 sustainability practitioners working at the cutting edge of innovation within industry, urban development, energy, technology and policy across Europe, including people tackling today's key challenges in carbon reduction, smart cities, governance and behavioural change across all these areas. Tech4i2 presented the Research Roadmap and facilitated a dedicated session.

• On March 22nd 2012, CROSSOVER was invited to present the policy-making 2.0 model to the practitioners of the "governance" network of UNDP Europe and CIS, in a webinar for the United Nations Development Programme that involved about 40 people from Central and Eastern Europe.
  
6) Case Studies
  
Within the scope of the CROSSOVER project, the European Commission's Joint Research Centre, Institute for Prospective Technological Studies (JRC-IPTS), in collaboration with a team of experts from the National Technical University of Athens (NTUA), carried out the mapping and identification of case studies on ICT solutions for governance and policy modelling (CROSSOVER, 2013). The research design envisaged a set of macro phases. The initial phase consisted in the creation of a case study repository through the identification and prioritization of potential sources of information and an open invitation for proposals of cases through Web 2.0 channels, followed by the definition of the first-round criteria for selecting at least twenty practices and the information-oriented selection of the corresponding case studies on applications of ICT solutions for governance and policy modelling. In the second phase, case studies were elicited through the definition of the second-round criteria for selecting eight promising practices and the application of a multi-criteria method, followed by further elaboration on the eight case studies selected by the multi-criteria method on the basis of desk research. In the third phase, the final four cases were selected and subjected to an in-depth analysis carried out through meticulous study of the available public documentation and interviews with key stakeholders. After the final selection of cases and the in-depth analysis, the findings were synthesized through the analysis of the trends emerging from applications of ICT solutions for governance and policy modelling, as well as the development of key considerations for the CROSSOVER roadmap on the themes within its scope. Finally, the key findings of the analysis of the four cases were shared with the CROSSOVER partners and the community that closely follows the Policy Making 2.0 domain over various Web 2.0 channels, to provide feedback and validation. The key results of the case studies are described later in the impact section.
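
The deliverable does not specify the multi-criteria method used for the second-round selection. Purely as an illustration of how such a selection can be operationalised, the following Python sketch ranks candidate practices by a weighted sum of criterion scores; the criteria names, weights and scores are hypothetical examples, not the project's own.

    # Illustrative weighted-sum multi-criteria ranking of candidate practices.
    # Criteria, weights and scores are hypothetical examples only.
    CRITERIA_WEIGHTS = {
        "impact_on_policy": 0.4,   # evidence of real policy impact
        "maturity": 0.3,           # how far beyond a pilot the practice is
        "transferability": 0.3,    # how easily it applies in other contexts
    }

    # Each practice is scored 1-5 against every criterion during desk research.
    practices = {
        "Practice A": {"impact_on_policy": 4, "maturity": 5, "transferability": 3},
        "Practice B": {"impact_on_policy": 5, "maturity": 3, "transferability": 4},
        "Practice C": {"impact_on_policy": 3, "maturity": 4, "transferability": 5},
    }

    def weighted_score(scores):
        """Aggregate the criterion scores into one weighted value."""
        return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

    # Rank all practices and shortlist the top ones for in-depth case study.
    ranking = sorted(practices, key=lambda p: weighted_score(practices[p]), reverse=True)
    print(ranking)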
  
	
  
7) Analysis of the Prize
  
This prize was given to the best policy-making 2.0 applications, that is, to the best uses of technology to improve the design, delivery and evaluation of government policy. The focus of the jury was on implementations that can show a real impact on policy making, either in terms of better policy or of wider participation. These technologies included, but were not limited to:

• Visual analytics
• Open and big data
• Modelling and simulation (beyond general equilibrium models)
• Collaborative governance and crowdsourcing
• Serious gaming
• Opinion mining
  
An important condition for participating in the selection was the real-life application of the technology to policy issues.

Out of 50 applications, the jury selected the best 12 and eventually the 3 winners, which received an iPad mini. The principal domains of the applications were as follows:
• 23 in the "Collaborative Governance and Crowd-sourcing" domain
• 13 in the "Open and Big Data" domain
• 4 in the "Visual Analytics" domain
• 2 in the "Modelling and Simulation (beyond general equilibrium models)" domain
• 2 in the "Serious Gaming" domain
• 1 in each of the following domains: "Open Source Governance", "Opinion Mining", "Participatory Policy Making"
  
	
  
All the relevant applications received have been integrated in the roadmap. The criteria for judging the applications were:

• Impact on the quality of policies
• Openness, scalability and replicability
• Extensiveness of public and policymakers' take-up
• Technological innovativeness
  
In this respect, the applicants to the prize were required to provide the following information:

• Name of the application
• Year of launch
• Short description of the technological domain
• Link to the application
• Description of the impact of the application on the quality of policies
• Description of the public and policymaker take-up of the application
• Description of the extent to which the application was technologically innovative
• Contact details of the applicant
  
	
  
	
  
8) LinkedIn Group Policy-Making 2.0
  	
  
A crucial element in the engagement of stakeholders is the creation of a group on LinkedIn called Policy Making 2.0[4], a virtual place where actual and potential practitioners of advanced ICT tools for policy-making can exchange experiences. The group displays a highly selective pool of high-level members (over 840) engaging in discussions and exchanges of views. In order to foster debate in the group, the CROSSOVER consortium posts on a regular basis information about the new cases and tools to be integrated in the knowledge repository. Other discussion topics relate to the best ways to engage government in online policy making, the posting of third-party content, and information about upcoming CROSSOVER workshops. In particular, the group has been used for disseminating the Survey on the ICT Needs of Policy Makers, as well as the roadmap in commentable format. The Policy Making 2.0 group also serves as a liaison channel with similar projects such as eGovPoliNet and OCOPOMO. As agreed, the eGovPoliNet LinkedIn group has merged with the CROSSOVER Policy Making 2.0 group, and after the end of the CROSSOVER project the interaction will continue, led by the eGovPoliNet consortium. Moreover, as we approach the end of the project, we have decided to shift from a closed LinkedIn group to an open one.

[4] http://www.linkedin.com/groups?home=&gid=4165795

1.3. Scope and definition
  
Policy-making 2.0 refers to a set of methodologies and technological solutions aimed at innovating policy-making. As we will describe in section 2.1, its scope goes well beyond the focus on the "decision-making" notion typical of eParticipation, and encompasses all phases of the policy cycle. The main goal is limited to improving the quality of policies, not to making them more consensual or representative.

Policy-making 2.0 is a new term that we have coined to express in more understandable terms the somewhat technical notion of "ICT for governance and policy modelling". Its usage in the course of the project proved more effective than the latter when discussing with stakeholders. Therefore, from now on we will refer to the roadmap as the Research Roadmap on Policy-Making 2.0.

The full set of methodologies and tools has been spelled out in the taxonomy in WP1[5]:
  
1.1. Open government information & intelligence for transparency
1.1.1. Open & Transparent Information Management
1.1.1.1. Open data policy
1.1.1.2. Open data licence
1.1.1.3. Open data portal
1.1.1.4. Code list
1.1.1.5. Vocabulary/ontology
1.1.1.6. Reference data
1.1.1.7. Data cleaning and reconciliation tool
1.1.2. Data published on the Web under an open licence
1.1.2.1. Human-readable data
1.1.2.2. Machine-readable data in proprietary format
1.1.2.3. Machine-readable data published in a non-proprietary format
1.1.2.4. Data published in RDF
1.1.2.5. SPARQL endpoint for querying RDF data
1.1.2.6. RDF data linked to other data sets
1.1.3. Visual Analytics
1.1.3.1. Visualisation of a single, static, embedded data set
1.1.3.2. Visualisation of multiple static data sets
1.1.3.3. Visualisation of a single live data feed or updating data set
1.1.3.4. Visualisation of multiple data points, including live feeds or updates
1.2. Social computing, citizen engagement and inclusion
1.2.1. Social Computing
1.2.1.1. Collaborative writing and annotation
1.2.1.2. Content syndication
1.2.1.3. Feedback and reputation management systems
1.2.1.4. Social Network Analysis
1.2.1.5. Participatory sensing
1.2.2. Citizen Engagement

⁵ The taxonomy presented here builds on the CROSSROAD taxonomy, which has been expanded, reviewed and updated by the members of the Consortium.
1.2.2.1. Online deliberation
1.2.2.2. Argumentation support
1.2.2.3. Petition, Polling and voting
1.2.2.4. Serious games
1.2.2.5. Opinion mining
1.2.3. Public Opinion-Mining & Sentiment Analysis
1.2.3.1. Opinion tracking
1.2.3.2. Multi-lingual and Multi-Cultural opinion extraction and filtering
1.2.3.3. Real-time opinion visualisation
1.2.3.4. Collective Wisdom Analysis and Exploitation
1.3. Policy Assessment
1.3.1. Policy Context Analysis
1.3.1.1. Forecasting
1.3.1.2. Foresight
1.3.1.3. Back-Casting
1.3.1.4. Now-Casting
1.3.1.5. Early Warning Systems
1.3.1.6. Technology Road-Mapping (TRM)
1.3.2. Policy Modelling
1.3.2.1. Group Model Building
1.3.2.2. Systems Thinking & Behavioural Modelling
1.3.2.3. System Dynamics
1.3.2.4. Agent-Based Modelling
1.3.2.5. Stochastic Modelling
1.3.2.6. Cellular Automata
1.3.3. Policy Simulation
1.3.3.1. Multi-level & micro-simulation models
1.3.3.2. Discrete Event Simulation
1.3.3.3. Autonomous Agents, ABM Simulation, Multi-Agent Systems (MAS)
1.3.3.4. Virtual Worlds, Virtual Reality & Gaming Simulation
1.3.3.5. Model Integration
1.3.3.6. Model Calibration & Validation
1.3.4. Policy Evaluation
1.3.4.1. Impact Assessment
1.3.4.2. Scenarios
1.3.4.3. Model Quality Evaluation
1.3.4.4. Multi-Criteria Decision Analysis
1.4. Identity, privacy and trust in governance
1.4.1. Identity Management
1.4.1.1. Federated Identity Management Systems
1.4.1.2. User-centric, self-managed and lightweight credentials
1.4.1.3. Legal-social aspects of eIdentity management
1.4.1.4. Mobile Identity (Portability)
1.4.2. Privacy
1.4.2.1. Privacy and Data Protection
1.4.2.2. Privacy Enhancing Technologies
1.4.2.3. Anonymity and Pseudonymity
1.4.2.4. Open data management (including Citizen Profiling, 'digital shadow' tracing and tracking)
1.4.3. Trust
1.4.3.1. Legal Informatics
1.4.3.2. Digital Rights Management
1.4.3.3. Digital Citizenship Rights and feedback loops
1.4.3.4. Intellectual Property in the digital era
1.4.3.5. Trust-building Services (including data processing and profiling by private actors for public services)
1.5. Future internet for collaborative governance
1.5.1. Cloud Computing
1.5.1.1. Cloud service level requirements
1.5.1.2. Business models in the cloud
1.5.1.3. Cloud interoperability
1.5.1.4. Security and authentication in the cloud
1.5.1.5. Data confidentiality and auditability
1.5.1.6. Cloud legal implications
1.5.2. Pervasive Computing & Internet of Things in Public Services
1.5.2.1. Ambient intelligence
1.5.2.2. Exploiting smart objects
1.5.2.3. Standardization
1.5.2.4. Business models for pervasive technologies
1.5.2.5. Privacy implications and risks
1.5.3. Provision of next generation public e-services
1.5.3.1. Fixed and mobile network access technologies
1.5.3.2. Mobile web
1.5.3.3. Models for information dissemination
1.5.3.4. Management of scarce network capacity and congestion problems
1.5.3.5. Large-scale resource sharing
1.5.3.6. Interworking of different technologies for seamless connectivity of users
1.5.4. Future Human/Computer Interaction Applications & Systems
1.5.4.1. Web accessibility
1.5.4.2. User-centered design
1.5.4.3. Augmented cognition
1.5.4.4. Human senses recognition
  
	
  
Policy-making 2.0 clearly encompasses a wide set of methodologies and tools. At first sight, it might appear unclear what the common denominator is. In our view, what they share is that they are designed to use technology in order to inform the formulation of more effective public policies. In particular, these technologies share a common approach in taking into account and dealing with the full complexity of human nature. As spelled out originally in the CROSSOVER project proposal: "traditional policy-making tools are limited insofar they assume an abstract and unrealistic human being: rational (utility maximizing), consistent (not heterogeneous), atomised (not connected), wise (thinking long-term) and politically committed (as Lisa Simpson)". Policy-making 2.0 thus accounts for this diversity. Its methodologies and tools are designed not to impose change and artificial structures, but rather to interact with this diversity. Agent-based models account for the interaction between agents that are different in nature and values (a minimal sketch follows this paragraph); systems thinking accounts for long-term interacting impacts; social network analysis deals with the mutual influences between people rather than fully rational choices; big data analysis draws on observed behaviour rather than theoretical models; persuasive technologies deal with the complex psychology of individuals and introduce gaming elements to involve more "casual" participants. Moreover, policy-making 2.0 tools allow all stakeholders to participate in the decision-making process.
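To make the agent-based idea concrete, the following minimal sketch in Python (illustrative only: the network, thresholds and population size are hypothetical assumptions, not CROSSOVER outputs) simulates heterogeneous agents who adopt a behaviour once enough of their randomly chosen peers have done so:

import random

random.seed(1)

class Agent:
    def __init__(self):
        # Heterogeneity: every agent has its own adoption threshold.
        self.threshold = random.uniform(0.1, 0.9)
        self.adopted = False

agents = [Agent() for _ in range(1000)]
# Crude interaction structure: each agent observes 10 random peers.
peers = {a: random.sample(agents, 10) for a in agents}
for a in random.sample(agents, 50):  # small seed of early adopters
    a.adopted = True

for step in range(20):
    for a in agents:
        if not a.adopted:
            share = sum(p.adopted for p in peers[a]) / 10
            a.adopted = share >= a.threshold
    print(step, sum(a.adopted for a in agents))

Because thresholds differ across agents, aggregate adoption can stay flat and then tip suddenly once enough low-threshold agents have moved, an outcome that a single "representative agent" cannot produce.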
1.4. Policy: Between politics and services

The application of technology to governmental issues is not a new topic. Indeed, e-government, and the newer buzzword of government 2.0, have become mainstream in recent years: how and why could a forward-looking research agenda still refer to the 2.0 paradigm as innovative? The novelty lies in the "policy" part of the definition.

So far, the application of "2.0" technologies to governmental processes has focussed mainly on the usage of social media for political communication, best exemplified by the Obama campaign. The typical narrative is that in the age of social media, traditional communication campaigns and political parties are unsuited to generate commitment and action by citizens, who instead want to take an active part in the campaign and self-organize via social media: "A candidate who can master the Internet will not only level the playing field; he will level the opposition" (Larry Purpuro, RightClick Strategies).

A second area of strong focus proved to be the collaborative provision of public services based on peer-to-peer support and open data, best exemplified by the widespread "appsfordemocracy" contests. The narrative here is that government should act as a platform and enable third parties (and citizens themselves) to co-create and deliver public services based on open government data. This is what Goldsmith and Eggers (2004) call "governing by network".

Indeed, the Obama administration clearly shows these priorities: it moved from state-of-the-art campaigning in order to be elected to implementing a strong open data policy, with crowdsourcing initiatives to let citizens create services based on these data.

Between "politics" and "public services co-delivery", much less attention has been devoted to the usage of social technology to improve public policy. While politics deals with the legislative branch, the Parliament, policy-making is mainly the realm of the executive branch. Typically, the job of policy-making involves a great deal of socio-economic analysis as well as consultation with stakeholders.

This roadmap aims to fill this gap, by providing a complete picture of how technology can improve policy-making.
2. Not just another hype: the demand side of policy-making 2.0

In the context of new technologies, we are periodically informed about the emerging wave that will change everything, only to see it quickly forgotten after years or even months in what Gartner calls the "trough of disillusionment". While some of this emphasis is certainly driven by commercial interests, in many other cases it reflects the genuine optimism of its proponents, who tend to underestimate the real-life bottlenecks to adoption by less enthusiastic people.

Morozov critically calls this cyber-utopianism or technological solutionism (Morozov 2013); on a similar note, many years of eGovernment policy have revealed the fundamental importance of non-technological factors, such as organisational change, skills, incentives and culture.

One way to prevent policy-making 2.0 from becoming yet another hype in the Gartner curve is to precisely spell out the challenges that these new technologies help to address. Indeed, the importance of this demand-driven approach based on grand challenges is fully embraced by the new Horizon2020 research programme of the European Union⁶. Furthermore, a demand-driven approach helps us to frame the technological opportunities in a language understandable to policy-makers, thereby supporting the awareness-raising objective of the CROSSOVER project.

When analysing the demand side, our first consideration is that policy-making is more important and complex than ever. The role of government has substantially changed over the last twenty years. Governments have had to re-design their role in areas where they were directly involved in service provision, such as utilities but also education and health. This is not simply a matter of privatisation, or of a linear trend towards smaller government. Indeed, even before the recent financial turmoil and the nationalisation of parts of the financial system, the role of government in European societies was not simply "diminishing", but rather being transformed. At the same time, it is increasingly recognized that the emergence of new and complex problems requires government to increasingly collaborate with non-governmental actors in understanding and addressing these challenges⁷. As an OECD report states:

"Government has a larger role in the OECD countries than two decades ago. But the nature of public policy problems and the methods to deal with them are still undergoing deep change. Governments are moving away from the direct provision of services towards a greater role for private and non-profit entities and increased regulation of markets. Government regulatory reach is also extending in new socio-economic areas. This expansion of regulation reflects the increasing complexity of societies. At the same time, through technological advances, government's ability to accumulate information in these areas has increased significantly. As government face more new and complex problems that cannot be dealt with easily by direct public service provision, more ambitious policies require more complex interventions and collaboration with non-governmental parties".

This is particularly challenging in our "complex" societies. "Complex" systems are those where "the behaviour of the system as a whole cannot be determined by partitioning it and understanding the behaviour of each of the parts separately, which is the classic strategy of the reductionist physical sciences". The present challenges governments must face, as described by the OECD, are complex because they are characterised by many non-linear interactions between agents; they emerge from these interactions and are therefore difficult to predict. The financial crisis is probably the foremost example of a complex problem, which proved impossible to predict with traditional decision-making tools.

⁶ http://ec.europa.eu/research/horizon2020/index_en.cfm?pg=h2020
⁷ See Ostrom: http://www.nobelprize.org/nobel_prizes/economics/laureates/2009/ostrom-lecture.html
2.1. The typical tasks of policy-makers: the policy cycle

Policy-making is typically carried out through a set of activities described as the "policy cycle" (Howard 2005). In this document we propose a new way of implementing policies, by first assessing their impacts in a virtual environment.

While different versions of the cycle are proposed in the literature, in this context we adopt a simple version articulated in 5 phases (a minimal sketch of the cycle as a data structure follows the list):
  
- agenda setting addresses the basic analysis of the nature and size of the problems at stake, including the causal relationships between the different factors
- policy design includes the development of possible solutions, the analysis of the potential impact of these solutions⁸, and the development and revision of a policy proposal
- adoption is the cut-off decision on the policy. This is the most delicate and sensitive area, where accountability and representativeness are needed. It is also the area most covered by existing research on e-democracy
- implementation is often considered the most challenging phase, as it needs to translate the policy objectives into concrete activities that have to deal with the complexity of the real world. It includes ensuring a broader understanding, the change of behaviour and the active collaboration of all stakeholders
- monitoring and evaluation make use of implementation data to assess whether the policy is being implemented as planned, and is achieving the expected objectives.
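Purely as an illustration (the phase names come from the list above; the one-line activity descriptions are our own shorthand), the cycle can be written down as an ordered structure whose last phase feeds back into the first:

# The five phases above as an ordered, wrapping structure.
POLICY_CYCLE = [
    ("agenda setting", "size the problem and its causal factors"),
    ("policy design", "develop options and assess their potential impact"),
    ("adoption", "take the cut-off decision on the policy"),
    ("implementation", "translate objectives into concrete activities"),
    ("monitoring & evaluation", "check implementation data against objectives"),
]

def next_phase(current: str) -> str:
    # Evaluation feeds back into agenda setting: the cycle wraps around.
    names = [name for name, _ in POLICY_CYCLE]
    return names[(names.index(current) + 1) % len(names)]

assert next_phase("monitoring & evaluation") == "agenda setting"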
  
The figure below (authors' elaboration based on Howard 2005 and EC 2009) illustrates the main phases of the policy cycle (in the internal circle) and the typical concrete activities (external circle) that accompany this cycle. In particular, the identified activities are based on the Impact Assessment Guidelines of the European Commission (EC 2009).

⁸ A very important element in policy design and formulation is given by ex-ante evaluation. In this respect, ICT tools for policy-making can play an important role, simulating alternative policy options and impacts before implementing a policy action.

Figure 3: Policy Cycle and Related Activities
  	
  
	
  
Traditionally, the focus of attention regarding the impact of technology on policy-making has been on the adoption phase, analysing the implications of ICT for direct democracy. In the context of the CROSSOVER project, we adopt a broader conceptual framework that embraces all phases of policy-making.
  
	
  
2.2. The traditional tools of policy-making

Let us now present the methodologies and tools already traditionally adopted in policy-making. Typically, in the agenda-setting phase, statistics are analysed by government, and by experts contracted by government, in order to understand the problems at stake and their underlying causes. Surveys and consultations, including online ones, are frequently used to assess the stakeholders' priorities, and are typically analysed in-house. General-equilibrium models are used as an assessment framework.

Once the problems and their causes are defined, the policy design phase is typically articulated through an ex-ante impact assessment approach. A limited set of policy options is formulated in-house with the involvement of experts and stakeholders. For each option, models are simulated in order to forecast possible sectoral and cross-sectoral impacts. These simulations are typically carried out with general-equilibrium models if the time frame is focused on the short- and medium-term economic impacts of policy implementation. Based on the simulated impact, the best option is submitted for adoption.

The adoption phase is typically carried out by the official authority, either legislative or executive (depending on the type of policy). In some cases, the decision is left to citizens through direct democracy, via a referendum or tools such as participatory budgeting; or to stakeholders through self-regulation.

The implementation phase is typically carried out directly by government, using incentives and coercion. It benefits from technology mainly in terms of monitoring and surveillance, in order to manage incentives and coercion, for example through the databases used for social security or tax revenues.

The monitoring and evaluation phase is supported by mathematical simulation studies and the analysis of government data, typically carried out in-house or by contractors. Moreover, as the numbers aggregate the impacts of everything that happens, including the policy itself, it is difficult to single out the impacts of one policy ex post. Final results are published in report format, and fed back into the agenda setting phase.
  
	
  
2.3. The key challenges of policy-makers

Needless to say, the current policy-making process is seldom based on objective evidence, and not all views are necessarily represented. Dramatic crises seem to happen too often, and governments struggle to anticipate and deal with them, as the financial crisis has shown. Citizens feel a sense of mistrust towards government, as shown by the decrease in voter turnout at elections.

In this section, we analyse and identify the specific challenges of policy-making. The goal is to clearly spell out "what is the problem" in the policy-making process that policy-making 2.0 tools can help to solve.

The challenges have been identified through desk-based research on "government failure" in a variety of contexts, and are illustrated by real-life examples.

One first overarching challenge is the emergence of a distributed governance model. The traditional division of "market" and "state" no longer fits a reality where public decision and action is effectively carried out by a plurality of actors. Traditionally, the policy cycle is designed as a set of activities belonging to government, from agenda setting to delivery and evaluation. However, in recent years it has been increasingly recognized that public governance involves a wide range of stakeholders, who are increasingly involved not only in agenda-setting but in designing policies, adopting them (through the increasing role of self-regulation), implementing them (through collaboration, voluntary action, corporate social responsibility), and evaluating them (such as in the case of civil society acting as watchdog of government). As Elinor Ostrom stated in the lecture she delivered when receiving the Nobel Prize in Economics⁹: "A core goal of public policy should be to facilitate the development of institutions that bring out the best in humans. We need to ask how diverse polycentric institutions help or hinder the innovativeness, learning, adapting, trustworthiness, levels of cooperation of participants, and the achievement of more effective, equitable, and sustainable outcomes at multiple scales". This acknowledgement leads to important implications for the CROSSOVER roadmap: policy-making 2.0 tools are not just tools for government, but for all stakeholders to participate in the policy-making process¹⁰.

⁹ http://www.nobelprize.org/nobel_prizes/economics/laureates/2009/ostrom-lecture.html
¹⁰ However, in our project we mainly focus on tools that are used or can be adopted by governments; otherwise we would risk enlarging the scope of the research roadmap too much.
  
	
  
2.3.1. Detect and understand problems before they become unsolvable

The continuous struggle for evidence-based policy-making can have important and potentially negative implications for the capacity to identify problems promptly. Policy-makers have to balance the need for prompt reaction with the need for justified action, by distinguishing signal from noise. Delayed actions are often ineffective; at the same time, short-term evidence can lead to opposite effects. In any case, governments have scarce resources and need to prioritize interventions on the most important problems.

For instance, the significant underestimation of the risks of the housing bubble in the late 2000s, and of the systemic reaction it would lead to, resulted in delayed reactions¹¹.

¹¹ http://www.wsws.org/en/articles/2013/01/26/fede-j26.html

Systemic changes do not happen gradually, but become visible only when it is too late to intervene or the cost of intervening is too high. For example, ICT is today recognized as a key driver of productivity and growth, but the evidence to prove this became available only years after the initial investment. In fact, the initial lack of correlation between ICT investment and productivity growth was mostly due to the incorrect measurement of ICT capital prices and quality. Subsequent methodologies found that computer hardware played an increasing role as a source of economic growth (see inter alia Colecchia and Schreyer 2002, Jorgenson and Stiroh 2000, Oliner and Sichel 2000).

The problem in this case is therefore twofold: to collect data more rapidly; and to analyse them with a wider variety of models that account for systemic, long-term effects and that are able to detect and anticipate weak signals or unexpected wild cards (a minimal sketch of such a detection step follows).
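By way of illustration only, the detection half of the problem can be as simple as flagging deviations from a rolling baseline; the indicator series, window and threshold below are hypothetical assumptions, not project data:

from statistics import mean, stdev

series = [100, 101, 99, 102, 100, 103, 101, 140, 104, 102]  # hypothetical weekly indicator
WINDOW, THRESHOLD = 5, 3.0  # rolling baseline length and z-score cut-off

for t in range(WINDOW, len(series)):
    baseline = series[t - WINDOW:t]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma > 0 and abs(series[t] - mu) / sigma > THRESHOLD:
        print(f"t={t}: value {series[t]} deviates sharply from baseline {mu:.1f}")

Real weak-signal detection would of course need the richer, systemic models discussed above; the point here is only that faster data collection makes even simple screens actionable.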
  
2.3.2. Generate high involvement of citizens in policy-making

The involvement of citizens in policy-making remains too often associated with short-termism and populism.

It is difficult to engage citizens in policy discussions in the first place: public policy issues are not generally appealing or interesting, as citizens fail to understand the relevance of the issues and to see "what's in it for me". The decline in voter turnout and the lack of trust in politicians reflect this. More importantly, there are innumerable cases where the "right" policies are not adopted because citizens "would not understand" or because they are not politically acceptable.

While the Internet has long promised an opportunity for widespread involvement, e-participation initiatives often struggle to generate participation. Participation is often limited to those who are already interested in politics, rather than involving those who are not.

When participation occurs, online debates tend to focus on eye-catching issues and polarized positions, in part because of the limits of the technology available. It is extremely difficult and time-consuming to generate open, large-scale and meaningful discussion.
  	
  
	
  
2.3.3. Identify "good ideas" and innovative solutions to long-standing problems

Innovation in policy-making is a slow process. Because of the technical nature of the issues at hand, the policy discussion is often limited to restricted circles. Innovative policies tend to be "imported" through "institutional isomorphism". Innovative ideas, from both civil servants and citizens, fail to surface to the top of the hierarchy and are often blocked by institutional resistance.

Existing instruments for large-scale brainstorming remain limited in usage, and fail to surface the most innovative ideas. Crowdsourcing typically focuses on the most "attractive" ideas, rather than the most insightful ones.
  
2.3.4. Reduce uncertainty on the possible impacts of policies

When policy options have been developed, simulations are carried out to anticipate the likely impact of policies. The option with the most positive impact is normally the one that is proposed for adoption.

Most existing methodologies and tools for the simulation of policy impacts work decently with well-known, linear phenomena. However, they are not effective in times of crisis and fast change, which unfortunately turn out to be exactly the situations where government intervention is most needed.

As an example, nowadays the European Central Bank bases its analysis of the euro area economy and monetary policy on a derived version of the DSGE model developed by Frank Smets and Raf Wouters in 2003¹². Smets and Wouters' model is deeply microfounded, which allows for a rigorous theoretical structure. Moreover, in this setting the reduced-form parameters are related to deep structural parameters in order to mitigate the Lucas critique, while the utility of agents can be taken as a measure of welfare in the economy (Phelps ed. 1970).

¹² http://www.ecb.int/home/html/researcher_swm.en.html
  	
  
However, DSGE models suffer from several shortcomings that jeopardize their ability to predict, let alone prevent, a global crisis:

• Agents are assumed to be perfectly rational, having perfect access to information and adapting instantly to new situations in order to maximize their long-run personal advantage
• So far, agents have entered the models as homogeneous representative entities, while it would be a step forward to be able to take agent heterogeneity into account
• Canonical models consider atomistic agents with little or no interaction, and are thereby unable to cope with network externalities
  
	
  
But most of all, it is the very notion of equilibrium which prevents standard models from dealing with crises. A stable steady-state equilibrium is a condition in which the behaviour of a dynamical system does not change over time, or in which a change in one direction is a mere temporary deviation (see the formal sketch below). This condition is characteristic of general equilibrium theory, in which a stable steady state is believed to be the norm rather than the exception. When, in the canonical model, we are out of equilibrium, the situation is seen just as a short lapse before the return to the steady state. This is in sharp contrast with the very notion of crisis, which represents a persistent deviation from the equilibrium. Loosely speaking, the crisis phenomenon is not even conceived within the framework of standard models.
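In formal terms (a standard textbook rendering in LaTeX notation, offered only to fix ideas): for a discrete-time dynamical system

\[ x_{t+1} = f(x_t), \]

a steady state \( x^{*} \) satisfies \( x^{*} = f(x^{*}) \), and it is locally stable when small deviations die out, i.e. when

\[ \lvert f'(x^{*}) \rvert < 1 . \]

A crisis, in the sense used above, is a persistent departure from \( x^{*} \), which such models rule out by construction: every shock is treated as a temporary deviation that decays back to the steady state.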
  
All these flaws are not only related to DSGE models, but also to Computable General Equilibrium (CGE) and macro-econometric forecasting models, which are the traditional policy-making tools. In this view, it would be very important to find new frameworks capable of avoiding those shortcomings.

Some such methodologies and methods already exist, and some governments are using them. Our aim is to push forward in that direction.

We need to move away from the equilibrium paradigm in order to be able to assess other issues: evolutionary dynamics; heterogeneity of technologies and firms; political and legal determinants of social stability; incentive structures; better modelling of technological change, innovation diffusion and economic systems (taking into account finance, debt and insurance); interactions between heterogeneous economic agents (firms and households) and central governments; heterogeneous responses to government incentives; economic dependence on the ecosystem.

Trichet, the former head of the ECB, put it clearly: "This doesn't mean we have to abandon DSGE... (but)... atomistic rational agents don't capture behaviour during a crisis... rational expectations theory has brought macroeconomics a long way... but there is a clear case to re-examine the assumptions".

But the need for new policy-making tools is not limited to the economic realm: in the future it will become more and more important to anticipate non-linear, potentially catastrophic impacts from phenomena such as: climate change (drought and global warming); threshold climate effects such as polar sea-ice retreat, out-gassing from melting permafrost, the Indian monsoon and ocean acidification; and social instability affecting economic well-being (social conflict, anarchy and mass population movements).
  	
  
	
  
The lack of understanding of systemic impacts has led to short-term policies which failed to grasp long-term, systemic consequences and side effects:

- An example of this approach is given by the sovereign debt issue. It is relatively easy for governments under popular pressure to increase expenditure and public debt to cope with short-term necessities, such as offsetting the negative impacts brought about by a regional or global crisis. On the other hand, it is harder to take into account the long-term effects determined by higher interest rates on private investment and consumption, through crowding out and fiscal pressure.
- Another example of short-termism is given by the financial policies pursued in South-East Asia at the beginning of the 1990s. Many countries, such as Thailand, liberalized their financial markets to foster the inflow of investments aimed at sustaining growth. Unfortunately, those capital inflows triggered a real estate bubble which was at the root of the 1997-1998 crisis.
- In 2008 the Central Bank of Iceland provided liquidity loans to savings banks on the verge of default on the basis of newly-issued, uncovered bonds, i.e. effectively printing fiat money on demand, causing a significant rise in inflation. To cope with this rise in prices, the Icelandic Central Bank had to keep interest rates very high, thereby contributing to an economic bubble.
- According to a large number of economists, the financial crisis was triggered by US government policies spanning two administrations which were intended to secure citizens' rights but instead resulted in an unprecedentedly high number of risky mortgages, as well as the decline in mortgage underwriting standards that ensued. According to the "Financial Crisis Inquiry Commission Report"¹³, those policies, together with the deregulation of the financial system, might have catalysed the crisis.
- Other examples are the bail-outs of financial institutions: in the short run those actions maintain employment and economic standards, while in the long term they induce moral hazard, keep inefficient companies operating, and decrease the trust of economic agents in regulation, which is the founding pillar of our economic system.

¹³ http://fcic.law.stanford.edu/report
2.3.5. Ensure long-term thinking

In traditional economics, decisions are utility-maximising. Agents rationally evaluate the consequences of their actions, and take the decision that maximizes their utility. However, it is well known that this rationalistic view does not fully capture human nature. We tend to overestimate short-term impact and underestimate the long term. In policy-making, short-termism is a frequent issue. People are reluctant to accept short-term sacrifices for long-term benefits. Politicians typically face elections every 5 years, and their decisions are often taken to maximize the impact "before the elections". There is also the perception that laypeople are less sensitive to long-term consequences, which are instead better understood by experts. Overall, long-term impact is less visible and easier to hide, due to the lack of evidence and data. As a result, decisions are too often taken looking at short-term benefits, even though they might bring long-term problems (a standard formal rendering of this bias is sketched below).
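A standard formal rendering of this bias (textbook quasi-hyperbolic discounting, given in LaTeX notation as an illustration, not a CROSSOVER result) values a stream of payoffs \( u_t \) as

\[ U_0 = u_0 + \beta \sum_{t=1}^{T} \delta^{t} u_t, \qquad 0 < \beta \le 1, \; 0 < \delta < 1 . \]

With \( \beta = 1 \) this is ordinary exponential discounting; with \( \beta < 1 \) every future period carries an extra penalty, so a policy with a visible short-term cost and a diffuse long-term benefit can be rejected even when its undiscounted net benefit is clearly positive.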
  	
  
Climate change is a typical policy area where sub-optimal decisions were taken because the short-term costs were considered to outweigh the long-term consequences. The long-term impact is not visible, while the short-term sacrifices are, even though ICT has had an important role in stimulating the debate and catalysing the attention of the media on the issue.
  
	
  
2.3.6. Encourage behavioural change and uptake

Once policies are adopted, a key challenge is to make sure that all stakeholders comply with regulations or follow the recommendations. It is well known that the greatest resistance to a policy is not active opposition, but lack of application.

For instance, several programmes to reduce alcohol dependency problems in the UK failed because they relied excessively on positive and negative incentives, such as prohibition and taxes, and did not take into account peer pressure and social relationships. They failed to leverage "the power of networks" (Ormerod 2010). Any policy aiming to reduce alcohol consumption through prohibitions and taxes is bound to fail as long as it does not take social networks into account, as binge drinkers typically have friends with similar problems. In another classical example (Christakis and Fowler 2007), a large-scale longitudinal study showed that the chances of a person becoming obese rose by 57 per cent if he or she had a friend who became obese.

The identification of social networks and the role of peer pressure in changing behaviour are not considered in traditional policy-making tools (a minimal sketch of such a peer-effect mechanism follows).
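The following minimal sketch in Python (the network, baseline rate and horizon are hypothetical assumptions; only the 57 per cent relative-risk figure echoes the study cited above) shows how a peer effect compounds across a friendship network:

import random

random.seed(7)
N = 2000
friends = {i: random.sample(range(N), 4) for i in range(N)}  # hypothetical friendship network
obese = {i: random.random() < 0.10 for i in range(N)}        # assumed 10% baseline prevalence

BASE = 0.02  # assumed yearly baseline chance of becoming obese
RR = 1.57    # +57% per obese friend, echoing Christakis and Fowler

for year in range(10):
    snapshot = dict(obese)
    for i in range(N):
        if not snapshot[i]:
            k = sum(snapshot[f] for f in friends[i])  # obese friends this year
            obese[i] = random.random() < min(1.0, BASE * RR ** k)
    print(year, sum(obese.values()))

A tax-only policy would act on BASE alone; the multiplicative network term RR ** k is exactly what traditional tools leave out.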
  
2.3.7. Manage crisis and the "unknown unknown"

The job of policy-makers is increasingly one of crisis management. There is robust evidence that the world is increasingly interconnected and unstable (also because of climate change). Crises are by definition sudden and unpredictable. Dealing with unpredictability is therefore a key requirement of policy-making, but the present capacity to deal with crises is designed for a world where crises are exceptional, rather than the rule. Donald Rumsfeld, the former US Secretary of Defense, famously said during the Iraq war that while the US government was capable of dealing with the "known unknowns", the difficulty was the increasing recurrence of "unknown unknowns": those things that we don't know that we don't know.

There is evidence that the instability and chaotic nature of our world are increasing, because of its growing connectedness. Every year, intense climate phenomena throw our cities into disarray through snow, flooding and fires. Each crisis seems to find our decision-makers unprepared and unable to deal with it promptly. As Taleb (2007) puts it, we live in the age of "Extremistan": a world of "tipping points" (Schelling 1969), "cascades" and "power laws" (Barabasi 2003) where extreme events are "the new normal" (a minimal formal sketch of such heavy-tailed behaviour follows the figure below). There are many indications of this extreme instability, not only in negative episodes such as the financial crisis but also in positive developments, such as the continuous emergence of new players on the market, epitomised by Google. The random vulnerability of today's world is well illustrated by this chart from the EC DG RESEARCH.
  
	
  
Figure 4: Total Disasters Reported
2.3.8. Moving from conversations to action
  
The collaborative action of people is able to achieve seemingly unachievable goals: experiences such as Galaxy Zoo and Wikipedia show that mass collaboration can help achieve disruptive innovation. Yet too often web-based collaboration is confined to complaints and discussions, rather than action. As one blogger put it, paraphrasing Marx: "Philosophers have only interpreted the world: the point is to complain about it".14
For example, the 2012 Italian elections saw an explosion of activity on social media discussing the different candidates. This energy then failed to translate into concrete action in the aftermath of the elections.

2.3.9. Detect non-compliance and mis-spending through better transparency
  
In times of crisis, it is ever more important for governments to ensure that financial resources are well spent and policies duly implemented. But monitoring is a cost in itself, and a certain margin of inefficiency in the deployment of resources is somehow "natural". Yet the cost of this mismanagement is staggering: for instance, in 2010, 7.7% of all Structural Funds money was spent in error or against EU rules.15 OECD estimates place the cost of corruption at 5% of global GDP.16 It would therefore be crucially important to be able to prevent such mismanagement through anticipatory corrective actions.
  
2.3.10. Understand the impact of policies

14 Quoted in Mick Fealty, "The wisdom of crowds", The Guardian, 24 February 2007, http://www.guardian.co.uk/commentisfree/2007/feb/24/towardsadeliberativedemocra
15 http://www.europeanvoice.com/article/2011/november/commission-names-worst-managers-of-eu-money/72613.aspx
16 http://www.oecd.org/dataoecd/51/5/49693613.pdf
Measuring the impact of policies remains a challenge. Ideally, policy-makers would like to have clear, real-time evidence on the direct impact of their choices. Instead, the effects of a policy are often delayed in time, and the ultimate impact is affected by a multitude of factors in addition to the policy itself. Timely and robust evaluation remains an unsolvable puzzle.

This is particularly true for research and innovation policy, where the results of investment are naturally expected only years later. As Kuhlmann and Meyer-Krahmer (1994) put it, "the results of evaluations necessarily arrive too late to be incorporated into the policy-making process".

2.4. When policy-making 2.0 becomes a reality: a tentative vision for 2030
This is a scenario of how future policy-making could be deployed in an ideal world, if all the opportunities of policy-making 2.0 tools were taken. It aims at illustrating how these technologies and methods could concretely be deployed and the effects they would have. It is deliberately a normative scenario, describing a positive and concrete future at a very high level.

The scenario is organised along the typical phases of the policy-making cycle. It applies to a hypothetical new privacy directive being developed in 2030.

2.4.1. Agenda setting phase: recognizing the problem
  
Brussels, 2030. The EC task force on privacy and data protection is alerted by a number of events. Their yearly report, accompanied by its publication as linked open data in February, has been accessed by more than 10,000 people in a week. Several high-profile online blogs have published the geo-visualised mash-up of the task force data with data from customer complaints about broadband slowness. The figures speak for themselves: the complaints from customers about privacy infringements and identity theft, collected through both the government's single feedback system and social media, mirror exactly the broadband disruption. All seem to point to some kind of "data theft" at the infrastructural level of the Internet. A similar analysis on linked open data shows an abnormal concentration of complaints over credit card fraud from the users of a limited number of ISPs that have struggled to obtain the infrastructure security certification. Anomalies in this correlation seem to weaken the case, but they are quickly explained when social network analysis is carried out: not only the users of these ISPs but also their friends and contacts are the most likely to report fraud.
  
While some years ago the task force members would still have addressed this through the traditional, slow policy process, only realizing its social impact once the mass media took it up, today a quick look at social media analytics confirms that the public is deeply concerned. Hashtags like #wherearemydata are drawing thousands of comments. The task force obtains real-time reports on the sentiment and opinions being shared publicly; it appears clear that people feel unprotected by existing instruments and regulation, and they voice their dissatisfaction mainly towards the Task Force itself. In particular, the reputation report quickly identifies a limited number of social media activists who show high influence in shaping public opinion on the matter, as their messages spread quickly. Historical text analysis of social media makes it possible to predict that users who complain about privacy infringement are likely to dramatically decrease the extent to which they share information and data on the web over the following weeks. This drastic cut in content sharing becomes a serious liability for an economy now built on the assumption that people naturally share and collaborate on the web. The reduction in knowledge sharing, as predicted by social media analytics, could further depress economic activity that is already fragile.

An in-depth investigation discovers a hardware hacking group that has targeted a selected number of poorly protected broadband providers to steal data directly from their traffic. The policy agenda of the Task Force is quickly switched to developing a revision of the regulation, in order to better link broadband regulation with data protection.
  
An open collaboration group is convened, with the direct involvement of users previously identified as "highly influential" through social media analytics. In addition, cross-analysis reputation tools are used to identify "experts" on joined-up policy approaches to data protection and broadband infrastructure, based on integrated data from social media (e.g. Klout and LinkedIn) and scientific impact (e.g. Altmetrics, ISI impact factor). This group is called on to provide an independent, fact-based analysis of the problem using the best available data.
  
In particular, the group is asked to understand and model the causal relationship between the fear of privacy infringements and the reduction in knowledge sharing, and between this reduction and economic growth. The analysis is carried out through a combination of network analysis, system dynamics and agent-based modelling. Their report simulates several possible scenarios, but the common theme is that a reduction in sharing activities by key influencers could lead to a major economic downturn, as non-sharing behaviour would soon spread from the "geeks" to the general public through imitation and social pressure.
  
The report is published for public review, enabling in-line comments and in-depth analysis of the raw data and models behind the analysis. It brings in hundreds of comments. Once a quick text-mining analysis is carried out, the comments turn out to cluster around an unjustified assumption in one scenario, and around a limited set of issues regarding the potential negative impact on net neutrality of implementing the new regulation. Hence, the scenario is double-checked and the assumption clarified, and net neutrality experts are brought on board in the working group.

2.4.2. Policy design
Once the nature and size of the problem are clarified, the working group is called on to design possible policy measures. A crowdsourcing exercise is launched, where anyone can submit ideas for specific amendments to the present regulation. The analysis initially focuses on the most voted suggestions, but these turn out not to be particularly insightful. A reputation management system is therefore integrated with the exercise, making it possible to identify original ideas and insights through votes "weighted" by expertise in the field. A set of 10 recommendations is presented for further analysis by the working group.

The working group, based on this input, formulates three policy options along the following axes:
  
- continuing with the current data protection framework;
- enhancing it with greater forms of self-regulation, increased transparency, easier enforcement and greater empowerment of users;
- defining a new, stricter data protection regulation.
  
The three options are then run through the large-scale simulation engine, which combines agent-based modelling, system dynamics, network analysis and big data analytics. This makes it possible to anticipate the unexpected effects of a new, stricter regulation: it would probably induce virtuous broadband providers to conform only to the minimum requested by the regulation, while private companies that typically choose the most secure providers would have to increase their expenditure on security, in particular in the field of web services, which is already weak in Europe and exposed to global competition. In addition, consumers would be satisfied by a perception of increased security, thereby paying less attention to their own control over their data, which would in the long term increase the risk of another security crisis.

  
All the models underlying the simulation are open to public review, in order to ensure the transparency of the initiative. Indeed, the first version of the ex-ante impact assessment showed that option 1 was the least risky and most beneficial, but a little-known researcher from Greece quickly pointed out a banal coding mistake in the database used to compute the historical series of privacy infringements.

In the end, option 2 is chosen as the most effective. The amendments to the regulation are rapidly drafted, publicly reviewed and then turned into law by the European Parliament.
  
	
  
2.4.3. Implementation	
  
The regulation envisages a strong role for the public in both enforcement and self-regulation. Each local branch of a broadband provider has to publish in real time, as linked open data, the results of the security certification, as well as any traffic management intervention it carries out.

In addition, a set of "persuasive games" has been developed to help consumers manage and control their data flows. Users receive badges each time they perform a data safety self-assessment, which is easy to carry out through a highly visual smartphone app that highlights the extent to which the user's behaviour diverges from the public recommendations and from the people in his/her social network. Unexpectedly, a kind of "safest kid on the block" game takes off, particularly among teenagers, who compete to outdo each other's safety provisions. New business models for third-party data management services are launched for those less interested in managing their own data.
  
	
  
As a result, the propensity to buy, share and collaborate online increases appreciably, driving a moderately positive economic impact.
  
	
  
2.4.4. Evaluation	
  
Real-time data analytics on the performance of data providers, as well as anonymized data on consumers' data protection measures, allow decision-makers to identify potential breaches as soon as they happen.

Participatory sensing tools, combined with opinion mining, allow citizens and policy-makers to easily monitor when new problems emerge.

Open data on the measures taken by regulators allow civil society organisations to verify the adequacy of government intervention.
  
	
  
2.5. The key challenges for policy makers and the corresponding phases in the policy cycle
  
Let us now relate the key challenges of policy-making activity to the phases of the policy cycle:
  
• The Agenda Setting phase is mostly related to the challenges:
o Detect and understand problems before they become unsolvable
o Manage crisis and the "unknown unknown"
o Ensure long-term thinking
• The Design phase is mostly related to the challenges:
o Encourage behavioural change and uptake
o Identify "good ideas" and innovative solutions to long-standing problems
o Reduce uncertainty on the possible impacts of policies
o Generate high involvement of citizens in policy-making
• The Implementation phase is mostly related to:
o Moving from conversations to action
o Reduce uncertainty on the possible impacts of policies
• The Monitoring and Evaluation phase is mostly related to:
o Detect non-compliance and mis-spending through better transparency
o Manage crisis and the "unknown unknown"

3. The supply side: current status and the Research Challenges
  
	
  
In this section, we illustrate in detail each research challenge which needs to be addressed in order to make the vision a reality and address the policy challenges described in the previous chapter, by describing:
- the definition;
- the potential opportunities for governance;
- the state of the art of market and research;
- the existing challenges; and
- the recommended research themes.
The research challenges are organised in two groups: the first comprises 6 challenges on Policy Modelling, while the second comprises 9 challenges on Collaborative Governance.
  
	
  
	
  
	
  
	
  
3.1. Policy Modelling

3.1.1. Systems of Atomized Models
  
	
  
Introduction and definition
  
This research challenge seeks ways to model a system by using already existing models, or by composing more comprehensive models out of smaller building blocks, sometimes also called "atoms", either by reusing existing objects/models or by generating/building them from scratch. The most important issue is therefore the definition/identification of proper (or most apt) modelling standards, procedures and methodologies, whether by using existing ones or by defining new ones. Further to that, the present sub-challenge calls for establishing the formal mechanisms by which models might be integrated in order to build bigger models, or simply to exchange data and valuable information between models. Finally, the issue of model interoperability, as well as the availability of interoperable modelling environments, should be tackled, together with the need for feedback-rich models that are transparent and easy for the public and decision makers to understand.
  
	
  
Why it matters in governance
  
Using existing objects/models that are able to describe systems, sub-systems and the interactions among them allows everyone to build their own insight into a specific problem/solution. In governance, this opportunity gives us the chance to:
• Release public data, linking them and producing visual representations able to reveal unanticipated insights.
• Use social computing to promote engagement and citizens' inclusion in policy decisions, and exploit the power of ICT in mining and understanding the opinions they express.
• Analyse policies and produce models that can be visualised and run to produce simulations able to show effects and impacts from different perspectives: political, economic, social, technological, environmental and legal.
  
Current Practice and Inspiring cases
In systems analysis, it is common to deal with the complexity of an entire system by considering it to consist of interrelated sub-systems. This leads naturally to considering models as consisting of sub-models. Such a (conceptual) model can be implemented as a computer model that consists of a number of connected component models (or modules). Component-oriented designs represent a natural choice for building scalable, robust, large-scale applications and for maximizing the ease of maintenance in a variety of domains.

An implementation based on component models has at least two major advantages:
• First, new models can be constructed by coupling existing component models of known and guaranteed quality with new component models (as in the sketch below). This has the potential to increase the speed of development.
• Secondly, the forecasting capabilities of two different component models can be compared, rather than comparing whole simulation systems as the only option.
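As a minimal sketch of the component idea (our own illustration in Python; the class and interface names are hypothetical and do not come from any of the tools discussed here), two independently developed component models are coupled through a shared step() contract and run by a generic driver:

    from typing import Protocol


    class Component(Protocol):
        # Minimal contract that every component model must satisfy.
        def step(self, inputs: dict) -> dict: ...


    class SoilWaterModel:
        # Toy water-balance component: tracks soil moisture in [0, 1].
        def __init__(self, moisture: float = 0.5) -> None:
            self.moisture = moisture

        def step(self, inputs: dict) -> dict:
            # Simple bucket model: gain from rain, lose a fixed fraction.
            self.moisture = min(1.0, self.moisture + inputs.get("rain", 0.0)
                                - 0.1 * self.moisture)
            return {"moisture": self.moisture}


    class CropGrowthModel:
        # Toy crop component: biomass grows with available moisture.
        def __init__(self, biomass: float = 1.0) -> None:
            self.biomass = biomass

        def step(self, inputs: dict) -> dict:
            self.biomass *= 1.0 + 0.2 * inputs.get("moisture", 0.0)
            return {"biomass": self.biomass}


    def run(components: list[Component], forcing: list[dict]) -> dict:
        # Generic driver: feeds each component's outputs to the next one.
        state: dict = {}
        for inputs in forcing:
            state.update(inputs)
            for component in components:
                state.update(component.step(state))
        return state


    print(run([SoilWaterModel(), CropGrowthModel()],
              [{"rain": 0.2}, {"rain": 0.0}, {"rain": 0.1}]))

A third component (say, an erosion model) could be appended to the chain without touching the existing ones, which is exactly the reuse advantage described above.
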
Further, common and frequently used functionalities, such as numerical integration services, visualisation and statistical ex-post analysis tools, can be implemented as generic tools, developed once and for all and easily shared by model developers. However, the current practice in composing and re-using models is still not sufficiently widespread. In relation to model reuse, this is mainly because few if any repositories actually exist.17 Moreover, the publicly available models are not "open" to modification or re-use. It would be useful if every paper containing a model included a link to an online version that people could run and modify. Some modelling environments (or modelling suites) provide examples and small libraries of ready-to-use models, but in most cases they are not completely open, nor is any explanation provided on how to reproduce them (their structure, parameters, etc.).

As an inspiring case, see the SEAMLESS project, which was funded by the EU Framework Programme 6 (Global Change and Ecosystems), ran from 2005 till March 2009, and developed a computerised framework for the integrated assessment of agricultural systems and the environment.18 During the project, a modular approach was chosen to develop a system named "Agricultural Production and Externalities Simulator" (APES), illustrated in Figure 5. APES is a modular simulation system targeted at estimating the biophysical behaviour of agricultural production systems in response to the interaction of weather, soils and different options of agro-technical management. Although a specific, limited set of components is available in the first release, the system is being built to incorporate, at a later time, other modules which might be needed to simulate processes not included in the first version. The processes are simulated in APES with deterministic approaches which are mostly based on mechanistic representations of biophysical processes. APES was used to compare alternative agricultural and environmental policy options, facilitating the process of assessing key indicators that characterise interactions between agricultural systems, natural and human resources, and society. The developed framework, named SEAMLESS-IF, in a final stage also enabled the linkage of quantitative models, pan-European databases and qualitative procedures to simulate the impact on society of biophysical, economic and behavioural changes. SEAMLESS-IF now facilitates ex-ante assessments at the full range of scales, from the global to the field level, to support policy and decision making for sustainable development. SEAMLESS-IF can nowadays be used to investigate the effects of agricultural and environmental policies while accounting for technical innovations. Further, the interactions of such policies with other major trends, such as climate change and increasing land use for bio-fuel crops, can be studied efficiently in the near future.

17 This is true for most sectors, even though, for instance, most energy models are based on the MARKAL family of models. Furthermore, something to consider is that models need to be customised, so having a single framework readily applicable to different contexts and sectors may actually be counterproductive.
18 http://www.seamless-ip.org

Figure 5: Agricultural Production and Externalities Simulator (APES)
	
  
	
  
Analyses with SEAMLESS-IF can be done at multiple scales and with varying time horizons, whilst focusing on the most important issues emerging at each scale. This is possible as the framework is based on research innovations in linking models across scales, allowing consistent "micro-macro" analysis, as well as in linking models across disciplines, allowing "economic-biophysical" analysis. The linked models range from a bio-physical field model to a farm model and an agricultural sector model for the EU; in other words, they ensure a consistent analysis of the effects EC policies may have on agricultural markets, farming systems and the environment. In addition, the effectiveness of a policy in its institutional context is assessed by applying qualitative procedures. The interlinked pan-European database provides the relevant data needed at the different scales.
  
	
  
	
  
For another inspiring example, have a look at Insight Maker (http://insightmaker.com/). Insight Maker allows users to build simulation models ("Insights") at all scales: from the smallest cell, to the social effects of product adoption, to global climate change. Once built, models can be shared with others. The models are called "Insights" as they will typically reveal one or more fascinating points about the system under study. All the simulations built with Insight Maker can be shared via the web, which means people can change the variables and see the results for themselves.
  
Vensim Molecules19 is a software tool for constructing system dynamics models from molecules of system dynamics structure. Molecules are made of primitive stock-and-flow or auxiliary elements and are, in turn, the building blocks of complete models: elements of substructure serving a particular purpose. Molecules provide a framework for presenting important and commonly used elements of model structure, making it faster and easier to develop system dynamics models.
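To make the "molecule" idea concrete, here is a minimal sketch of our own (plain Python, not Vensim code; the variable names are invented): a first-order smoothing molecule, a classic piece of system dynamics substructure, is written once and reused in two unrelated models:

    def smooth(stock: float, target: float, adjustment_time: float, dt: float) -> float:
        # First-order smoothing "molecule": a stock that closes the gap
        # to its target at a fractional rate of 1/adjustment_time.
        return stock + dt * (target - stock) / adjustment_time


    # The same molecule serves as "perceived demand" in a market model
    # and as "expected price" in a pricing model.
    perceived_demand, expected_price = 100.0, 10.0
    for _ in range(50):
        perceived_demand = smooth(perceived_demand, 140.0, adjustment_time=8.0, dt=1.0)
        expected_price = smooth(expected_price, 12.0, adjustment_time=20.0, dt=1.0)

    print(round(perceived_demand, 1), round(expected_price, 2))
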
19 http://www.vensim.com/molecule.html
  
AnyLogic20 is a multi-method simulation modelling tool capable of integrating and combining the following modelling approaches: system dynamics, discrete event simulation and agent-based modelling. AnyLogic's simulation language is composed of stock-and-flow diagrams (used for system dynamics modelling); statecharts, which define the agents' behaviour in agent-based modelling; action charts (used to define algorithms); and process flowcharts, which are the basic constructs for defining processes in discrete event modelling.
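A toy sketch of such multi-method coupling (our own illustration in Python, not AnyLogic code; all names and parameters are invented): a two-state "statechart" governs each agent, while an aggregate system dynamics stock of public awareness feeds back on the agents' transition probability:

    import random

    random.seed(42)

    agents = ["POTENTIAL"] * 1000  # agent-based part: one state per consumer
    awareness = 0.0                # system dynamics part: an aggregate stock

    for step in range(30):
        share_adopted = agents.count("ADOPTER") / len(agents)
        # SD stock update (Euler step): inflow from adoption, outflow by decay.
        awareness += 0.1 * share_adopted - 0.05 * awareness
        # Statechart transition POTENTIAL -> ADOPTER, driven by the SD stock.
        p_adopt = 0.01 + 0.5 * awareness
        agents = ["ADOPTER" if state == "ADOPTER" or random.random() < p_adopt
                  else "POTENTIAL" for state in agents]

    print("adopters after 30 steps:", agents.count("ADOPTER"))
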
  
	
  
Available Tools

A very interesting tool is En-ROADS21, a global simulation model that focuses on how changes in global GDP, energy efficiency, R&D results, carbon price, fuel mix, and other factors will change carbon emissions, energy access, and temperature.
  
	
  
	
  
En-ROADS is designed to complement other, more disaggregated models addressing these questions, and relies on those models and on EIA projections for testing and data.
20 http://www.xjtek.com/
21 http://climateinteractive.org/simulations/en-roads
En-ROADS is customized to address enquiries about how much technological breakthroughs might contribute to addressing climate change. Such breakthroughs include, for instance, R&D and the scale-up of a new zero-carbon energy supply, renewable energy, energy efficiency, and inexpensive natural gas. More precisely, En-ROADS investigates which assumptions about the technology and the economy would be necessary for a breakthrough to grow with enough speed and scale to deliver climate goals.
One of the most innovative parts of En-ROADS concerns its capability to test assumptions about the potential success of R&D towards zero-carbon energy. In particular, the simulation investigates the likely dynamics of the emergence of a new energy supply, as well as how fast it could grow, displace high-carbon sources and reduce carbon emissions. En-ROADS is an extension of the C-ROADS model, which is described later in the roadmap. The distinction between the two models is that while C-ROADS focuses on how changes in national and regional emissions could affect GHG emissions and climate outcomes, En-ROADS focuses on how changes in the energy, economic and public policy systems could influence GHG emissions and climate outcomes.
  
	
  
Key challenges and gaps
  
With regard to the implementation architecture and the use of modelling frameworks, there are two major problems:
• the framework design and implementation must be optimized to balance carefully its flexibility and its usability, to avoid incurring either a performance penalty or too steep a learning curve for users; and
• developing components for a specific framework constrains their use to that framework.
  
	
  
The most immediate option to overcome such problems is developing inherently reusable components (i.e. not framework-specific), which can be used in a specific modelling framework by encapsulating them in dedicated classes called "wrappers"; such classes act as bridges between the framework and the component interface, as sketched below. The disadvantage of this solution is the creation of another "layer" in the implementation, which adds to the machinery already implemented in the framework. The appropriateness of this solution, in terms of both ease of implementation and overall performance, must be evaluated case by case.
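The wrapper idea can be sketched as a thin adapter class (a hypothetical Python example; no real framework's interface is assumed): the reusable component keeps its own native interface, and the wrapper translates the framework's calling convention into the component's:

    class ReusableGrowthModel:
        # Framework-agnostic component with its own native interface.
        def advance(self, population: float, rate: float) -> float:
            return population * (1.0 + rate)


    class FrameworkComponent:
        # The (hypothetical) interface a host framework expects.
        def execute(self, state: dict) -> dict:
            raise NotImplementedError


    class GrowthModelWrapper(FrameworkComponent):
        # Adapter: bridges the framework's execute(state) call to the
        # component's native advance() call; this is the extra "layer"
        # mentioned in the text.
        def __init__(self, component: ReusableGrowthModel) -> None:
            self.component = component

        def execute(self, state: dict) -> dict:
            state["population"] = self.component.advance(
                state["population"], state.get("growth_rate", 0.02))
            return state


    state = {"population": 1000.0, "growth_rate": 0.03}
    component = GrowthModelWrapper(ReusableGrowthModel())
    for _ in range(3):
        state = component.execute(state)
    print(round(state["population"], 1))  # 1000 * 1.03**3, about 1092.7
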
  
Regardless of the choice between developing framework-specific or intrinsically reusable components, there is a basic choice which must be carefully evaluated beforehand. It relates, in general terms, to the framework as a flexible modelling environment for building complex models (model linking), but also to the framework as an efficient engine for the simulation and calibration of model components (model execution). Modern software technologies allow building flexible, coherent and elegant constructs, but that comes at a performance cost. Without even going into the specifics of Object Oriented Programming (OOP), it seems important to point out that object-oriented constructs, which enhance the flexibility, modularity and reuse of software, rely on virtual method calls, dynamic dispatching, and so on. All these operations are resource-intensive and can in some cases heavily affect code performance; this becomes evident in applications where such calls are made thousands of times at every simulation step, as the sketch below illustrates.
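The performance trade-off can be illustrated with a deliberately naive sketch of our own (Python, with NumPy assumed available): one dynamic method dispatch per element at every step, versus a single vectorised array operation that gives up per-object flexibility:

    import numpy as np


    class Cell:
        # One object per model element: flexible, but every simulation
        # step pays one dynamic method dispatch per cell.
        def __init__(self, value: float) -> None:
            self.value = value

        def step(self, rate: float) -> None:
            self.value *= 1.0 + rate


    # Object-per-element version: 100,000 dispatches per simulation step.
    cells = [Cell(1.0) for _ in range(100_000)]
    for _ in range(10):
        for cell in cells:
            cell.step(0.01)

    # Vectorised version: one array operation per step, same numbers.
    values = np.ones(100_000)
    for _ in range(10):
        values *= 1.01

    # Both approaches produce identical results; the second avoids the
    # per-call overhead at the price of a less flexible, flat representation.
    assert abs(cells[0].value - values[0]) < 1e-9
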
  
However, the horizon for model composition is even more clouded, as the potential advantages of composing bigger models from smaller ones have been demonstrated only recently. This is essentially due to the problem of interoperability and integration across different vendors' (thus proprietary) model formats, and to the lack of standards for performing composition tasks. Another problem stems from the fact that many models are still too dependent on their implementation methodology. Moreover, model integration is at present almost non-existent. Very few modelling environments/suites provide import/export functionalities, and a standard language for model interoperability is not currently available. Most current practice for data communication or information transfer relies on third-party solutions: interoperability is in most cases achieved by transferring data via electronic spreadsheets or, only in rare cases, by using Database Management Systems (DBMS) or Enterprise Resource Planning (ERP) systems.
	
  
Current research
Current research, as well as previous research, has not yet tackled (with the exception of just a few cases) the problem of integrating different models. At present, due to the plethora of different modelling/simulation environments/suites, as well as to differences between scientific fields, many competing file formats exist. It is possible that vendors perceive the modelling practice as a very small market niche (as the users stem mainly from academia and, to a very small extent, from private companies where Decision Support Systems are used; moreover, the Public Administration share is negligible) and are therefore reluctant to introduce interoperability features.
  
Also, current research, as well as previous research, has only recently begun to explore the following issues:
• Open-source modelling and simulation environments (open environments are rising in importance in the research community, albeit in most cases they only allow a model to be implemented and simulated according to the single modelling methodology they refer to).
• Communication of data among models developed in different proprietary (or open) environments through third-party solutions (e.g. interoperability is in most cases only achieved by transferring data by means of electronic spreadsheets or, in rare cases, by using a DBMS or an organisation's ERP).
• Open visualisation of results stemming from model simulation (e.g. online visualisation of simulation results in a browser, by interfacing directly with the simulation engines in only a few cases or, as is more often the case, by connecting to a third-party means as described in the previous bullet point).

Future research

Future research should therefore focus on:
  
• Definition of standard procedures for model composition/decomposition: e.g. how to deductively pass from a macro-description of a model to the fine definition of its building blocks or molecules (top-down approach), and how to inductively conceive a progressive composition of bigger models by aggregating new modules as soon as they are needed (bottom-up approach) or by expanding already existing objects.
• Proposition of a minimum set of archetypical structures, building blocks or molecules that might be used according to the proper level of decomposition of the model (e.g. systemic archetypes, in the Systems Thinking / System Dynamics approach, might be useful to describe the overall behaviour through the main variables of the system to be modelled at a macro-to-middle level). The procedures to implement, validate and redistribute any further improvement of these "minimal" objects should be investigated.
• Definition of open modelling standards as the basis for interoperability, that is, common file formats and templates (e.g. by means of XML) which would allow models described in these files to be opened, accessed and integrated into every compliant model-design and simulation environment22 (a minimal sketch of such a format follows this list).
• Interoperability, also in terms of Service Oriented Architectures (e.g. certain stand-alone, always-operative models might expose "services" making available their endogenous data, bits of information, or some peculiar function or structural part, while other models may request those services when needed; this in turn requires the definition of model repositories, a list of operative models and the functionalities they expose, which finally entails the definition of a SOA among interoperable models).
• Definition and implementation of model repositories (and of procedures to add new objects to them), even if they are restricted to hosting models developed according to a specific methodology (Agent Based, System Dynamics, Event Oriented, Stochastic, etc.).
• Definition and implementation of the new relationships that are created when two models are integrated. All potentially important relationships resulting from a model integration/composition should be identified and, where appropriate, included in the new derived integrated model.
• Input/output definition and re-definition: the integration of modelling techniques is a pertinent issue within the scope of this challenge. Multi-modelling tools should in the future be available not only to experts but also to lay users. Moreover, at present, only a few of the currently available modelling/simulation suites provide the possibility to build a model by referring to a different modelling methodology.

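As an illustration of what such an open, XML-based model format could look like (a made-up schema for this roadmap, deliberately much simpler than SMILE/XMILE, which are discussed in the footnote below), the sketch defines a tiny stock-and-flow model in XML and a loader that any compliant environment could implement:

    import xml.etree.ElementTree as ET

    MODEL_XML = """
    <model name="population" dt="1.0" steps="10">
      <stock name="population" initial="1000"/>
      <flow name="births" target="population" expression="0.02 * population"/>
    </model>
    """

    root = ET.fromstring(MODEL_XML)
    dt = float(root.get("dt"))
    steps = int(root.get("steps"))
    stocks = {s.get("name"): float(s.get("initial")) for s in root.findall("stock")}
    flows = [(f.get("target"), f.get("expression")) for f in root.findall("flow")]

    # Euler integration; flow expressions are evaluated against the current
    # stock values (eval is acceptable for a toy loader, not for production).
    for _ in range(steps):
        rates = {target: eval(expr, {}, dict(stocks)) for target, expr in flows}
        for target, rate in rates.items():
            stocks[target] += dt * rate

    print(round(stocks["population"], 1))  # 1000 * 1.02**10, about 1219.0

A shared format of this kind is what would allow a model authored in one suite to be opened and integrated in another, and what a public model repository could validate on upload.
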
22 Portability is very hard to implement: in the past there was a significant effort with SMILE and later XMILE to make SD models portable between different software programs. This was really a best-case scenario, as the software programs involved were very similar to start with, but it ended unsuccessfully, since agreement and implementation were never reached. Deciding on a universal format to bind existing things together seems unlikely to work (http://xkcd.com/927/). There has been greater success with things like Modelica, where you get a format and an app and other programs then decide to adopt the format themselves: a more organic process.
3.1.2. Collaborative Modelling

Introduction and definition
  
The English sayings "two heads are better than one" and "too many cooks spoil the broth" give an idea of the expectations that arise from the collaboration of people. On the one hand, one would expect a group of people to observe and perceive situations better, and to make better decisions, than a single person would. On the other hand, it is also common knowledge that the collaboration of several people entails the problem of group coordination, which, if disregarded, can make group work inefficient compared to the work of a single person. There is a need for an authoritative gatekeeper who is also the modeler implementing the model; otherwise collaborative sessions would quickly get out of control, with non-implementable ideas rapidly becoming the focus of discussion.
  
There are three kinds of problems that are typically approached by groups:
• cognition problems: problems with a definite solution, or a set of solutions that are certainly better than others;
• coordination problems: problems that require the group to figure out how to coordinate the behaviour of its members;
• cooperation problems: problems which feature the involvement of several self-interested, distrustful people who have to work together.
  
	
   	
  
Collaborative modelling (also called group model building) refers to a process where a number of people actively contribute to the creation of a model. The weakest form of involvement is feedback to the session facilitator, similar to the conventional way of modelling. Stronger forms are proposals for changes or (partial) model proposals. In this particular approach the modelling process should be supported by a combination of narrative scenarios, modelling rules, and e-Participation tools (all integrated via an ICT e-Governance platform), so that the policy model for a given domain can be created iteratively through the cooperation of several stakeholder groups (decision makers, analysts, companies, civil society, and the general public).
  	
  
As a matter of fact, groups require rules (or cultural norms) to maintain order and coherence, as well as diversity and independence of their members, in order to create a kind of collective intelligence. Bringing together people with diverse perspectives and backgrounds to work in multi-disciplinary teams is expected to improve overall group performance, so the first issue on which the collaborative process should be based is the definition of a shared modelling rules framework (the social norms), guiding the modelling team in determining whether a proposal is accepted or rejected. Two usually adopted types of rules are:
  
• Rules of majority, where a certain number of group members had to support or oppose a proposal in order for the whole group to accept or reject it (e.g., more than half). A tie-break rule was sometimes specified (e.g., for the case of an equal number of supporters and opponents). The tie-break could involve seniority issues.
• Rules of seniority, where the weight of a group member’s support or opposition was related to his or her status within the group. This status could be acquired (e.g., by experience) or associated with a position to which the member was appointed. A frequent example of this was the case of a more experienced modeller who was considered the leader by the group and took decisions on its behalf. The other members filled the role of consultants in such a case.
	
  
These rules were sometimes set up explicitly before the group began their work, or in an early phase of this work. In most cases, however, they rather emerged as the result of each member’s behaviour: individuals making regular contributions of high quality were likely to acquire seniority, while in homogeneous teams majority rules were used more often.
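
As an illustration, the two rule types can be expressed as simple acceptance functions. The following is a minimal, hypothetical sketch of ours (names such as Vote and accept_by_seniority are illustrative, not drawn from any tool cited in this roadmap), assuming each vote carries a support/oppose flag and a seniority weight:

    from dataclasses import dataclass

    @dataclass
    class Vote:
        member: str
        supports: bool     # True = support, False = oppose
        seniority: float   # status weight, e.g. years of experience

    def accept_by_majority(votes, quota=0.5, tie_breaker=None):
        """Rule of majority: accept when more than `quota` of the voters support.
        An optional tie-break function resolves an exact tie."""
        supporters = sum(1 for v in votes if v.supports)
        if supporters * 2 == len(votes) and tie_breaker is not None:
            return tie_breaker(votes)
        return supporters / len(votes) > quota

    def accept_by_seniority(votes):
        """Rule of seniority: weight each vote by the member's status."""
        return sum(v.seniority if v.supports else -v.seniority for v in votes) > 0

    def most_senior_decides(votes):
        """Tie-break that follows the most experienced member."""
        return max(votes, key=lambda v: v.seniority).supports

    votes = [Vote("analyst", True, 2.0), Vote("modeller", False, 8.0),
             Vote("citizen", True, 1.0), Vote("chair", False, 5.0)]
    # 2-2 tie, resolved by the most senior member (the modeller), who opposes:
    print(accept_by_majority(votes, tie_breaker=most_senior_decides))  # False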
  
	
  
Why it matters in governance
  
From a very high level of abstraction, collaborative modelling itself can be seen as a social interaction between several people, while the people who together perform the modelling process form a social entity. Thus, the process of collaboratively defining and implementing a model, with particular reference to public policy modelling, is strictly connected with the public aspect of every citizen’s life: from the communities bridged by the decision makers who collaboratively define policies, to the average citizen who interacts with other citizens within the rules framework defined by those same policies.
  	
  
Starting from the needs perceived by citizens, the limitations of the existing modelling techniques adopted in policy making include the following issues:
  
• Changing models is too time-consuming, and integrating with other diagrams is difficult. There are also version-control problems.
• It is not possible for more than one person to work on the same diagram at the same time.
• Modelling has to be done at the specific location where the modeller is present.
• Contributions to the model come only from those interviewed or present at a group meeting, limiting the potential contribution from a larger group.
• Low model acceptance: the model resulting from the modelling session is not supported by some of the stakeholders.
• Participants feel misunderstood, as a consequence of bad elicitation or a wrong understanding of the model.
• Low perceived model quality and limited model comprehension: individuals do not fully understand the model or do not agree with it.
  
	
  
The reasons that argue for conducting policy modelling in a collaborative manner are:
  
• No single person typically understands all the requirements; understanding tends to be distributed across a number of individuals.
• A group is better capable of pointing out shortcomings than an individual.
• Individuals who participate during analysis and design are more likely to cooperate during implementation.
  
	
  
Collaborative modelling calls for the definition of the citizen’s role in the public policy modelling process (e.g. the mass participation issues and processes have already been researched in depth by the e-Participation research programmes). In order to guarantee participation there are some prerequisites that should be fulfilled:
  
• All citizens who access ICT services in order to participate should represent the views of the communities affected by the given policy;
• All citizens should be able to take part in the modelling process via intuitive IT systems that enable an effective and efficient contribution;
• All citizens should possess proper skills (or be assisted) to purposely follow a process of group model-building, in order to avoid/abate wrong mental models and thus ultimately reach a shared vision of the problem.
	
  
Current Practice and Inspiring cases
  
In current practice, collaborative modelling is mainly performed offline; still, the rules and guidelines for session processes are not yet sufficiently widespread. In fact, the abatement of wrong mental models and the creation of knowledge from information usually imply a dialogue among people with different views of the problem, as well as the need for critical skills. Further to that, the information that emerged in a discussion has to be grounded and definitively transferred to the formal model. Thus, e-Participation might be of help in achieving a critical mass of data and information exchange online, but in itself it does not solve the problem of mass cooperation and collaboration in a formal modelling process. Even more, participation in this process entails, at present, a thorough knowledge of modelling processes or tools that an average citizen does not have. Therefore, there is an urgent need for intuitive interfaces, modelling wizards and guided, simplified approaches to modelling. Starting from the relevance of collaborative modelling in policy making, an early inspiring case comes from Maarten Sierhuis and Albert M. Selvin who, working at NYNEX Science & Technology Inc. in New York, presented in 1996 an applied research report, “Towards a Framework for Collaborative Modelling and Simulation”, describing methodologies for modelling and simulation in a collaborative analysis or design project, and describing a case study in which Conversational Modelling, a software-supported technique for collaborative modelling, enabled participants to construct static knowledge models in collaborative sessions. The sessions described in the report resulted in the identification of 207 queries. Of these, 24 were chosen for detailed modelling. As a result of the modelling, 44 resources, 29 knowledge items, 58 data items, and 8 organizational issues were identified. The response from participants was positive. Many stated that they had learned more about each other’s work in the conversational modelling sessions than they had been able to in the course of their normal work activities. The development organization has been able to use the output of the sessions to generate design requirements. A picture of the interface (figure 6) used during the sessions follows.
Figure 6: Conversational Modelling Interface
  	
  
	
  
As a more recent inspiring case, one can refer to the results of the FP7 projects OCOPOMO23 or PADGETS24. The latter, PADGETS, aims at bringing together two well-established domains: the mashup architectural approach of web 2.0 for creating web applications (gadgets) and the methodology of system dynamics for analysing complex system behaviour. The objective is to design, develop and deploy a prototype toolset that will allow policy makers to graphically create web applications to be deployed in the environment of underlying knowledge in Web 2.0 media. The project introduces the concept of Policy Gadget (PADGET), similar to the approach of gadget applications in web 2.0, to represent a micro web application that combines a policy message with underlying group knowledge in social media (in the form of content and user activities) and interacts with end users in popular locations (such as social networks, blogs, forums, news sites, etc.) in order to get and convey their input to policy makers.
23 Open Collaboration in Policy Modelling, http://www.ocopomo.eu
24 http://www.padgets.eu
Figure 7: the PADGET Framework
  
	
  
A PADGET is composed of four main components:
• A message, that is, a policy in any of its stages and forms.
• A set of interaction services that allows users to interact with the policy gadget (find it, access its content, comment on its content, share it, etc.). These interfaces may be provided either by the underlying social media platforms in which the PADGET campaign is launched or by the PADGET itself when it takes the form of a micro application (i.e. in the case of the iGoogle gadget).
• The social context, that is, the framework describing the social activity and content relating to the policy gadget in each individual social media platform where the policy gadget is present.
• The decision services, which are offered by two modules: the PADGETS analytics and the PADGETS simulation model. The decision services component is responsible for generating the information outputs to be presented to the PADGET initiator (usually a policy maker).
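
To make the composition concrete, the following minimal sketch models the four components as a plain data structure. All class and field names are our own illustrative choices, not the PADGETS project's actual API:

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Padget:
        """Illustrative container for the four PADGET components."""
        message: str                               # the policy, in any stage/form
        interaction_services: dict[str, Callable]  # find/access/comment/share hooks
        social_context: dict[str, list[str]]       # per-platform activity and content
        decision_services: list[str] = field(
            default_factory=lambda: ["analytics", "simulation_model"])

        def collect_feedback(self, platform: str, comment: str) -> None:
            """Record end-user input gathered on one social media platform."""
            self.social_context.setdefault(platform, []).append(comment)

    pg = Padget(
        message="Draft regulation on urban air quality",
        interaction_services={"share": lambda url: f"posted {url}"},
        social_context={},
    )
    pg.collect_feedback("blog", "Extend the low-emission zone to the suburbs.")
    print(pg.social_context)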
  	
  
	
  
PADGETS will use publicly available APIs for interconnecting, publishing and retrieving content from the underlying social media platforms. The collected information and user activities that policy gadgets invoke in the media platforms will be categorized using semantic tags as to their relation to the policies, in order to help the policy maker form an opinion about what users think about the relevant issues and policies.
An interesting model is Threshold 21 (T21), built using the System Dynamics methodology to facilitate decision making. Feedback relations among key variables within sectors are endogenously simulated by T21, and those relations can lead to further feedback to other sectors. This process is valuable for learning more about the complex interactions among and across sectors that need to be taken into account in order to develop more effective policies and mitigate or avoid negative side effects. This approach makes it possible to bring together experts from different sectors to better understand these relations and to obtain the data needed to translate a qualitative causal diagram (a map of the system) into a quantitative model, using a participatory approach.
  	
  
	
  
	
  
Source: “Macro Economic Policy Analysis Applications”, presented by John Shilling at the Transatlantic Research on Policy Modelling Workshop25
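
The kind of cross-sector feedback T21 captures can be illustrated with a toy System Dynamics exercise. The sketch below is a deliberately simplified two-stock simulation with invented coefficients, not T21's actual equations: GDP funds health spending, better health raises productivity, and higher output raises emissions, which feed back negatively on health.

    # Toy System Dynamics sketch (Euler integration of two coupled stocks).
    # Structure and coefficients are illustrative only, not taken from T21.
    def simulate(years, dt=1.0):
        gdp, health = 100.0, 1.0   # stocks: economic output, health index
        history = []
        for _ in range(int(years / dt)):
            emissions = 0.02 * gdp                       # (+) GDP -> emissions
            health_flow = 0.012 * gdp - 0.5 * emissions  # (+) spending, (-) emissions
            gdp_flow = 2.0 * health                      # (+) health -> productivity
            health += dt * health_flow                   # integrate the loops
            gdp += dt * gdp_flow
            history.append((gdp, health))
        return history

    for gdp, health in simulate(years=5):
        print(f"GDP={gdp:7.1f}  health={health:5.2f}")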
	
  	
  
	
  
An interesting case of application of the model to inform policy making regarded China. More in particular, a number of agencies and NGOs working in China cooperated with the MI in an attempt to address a wide variety of issues related to achieving more sustainable growth, dealing with resource constraints (e.g. water, agriculture, energy) in the face of a large and growing population, reducing GHG emissions, and promoting more innovation to move down a greener path. The effects of growth in population, the economy, transportation, energy consumption and food imports on growth prospects were examined. The study also identified some important cross-sector factors, for example that slowing the growth of the per capita size of housing units would have many beneficial effects, including less cement and steel production, leading to less GHG emissions, less energy demand, and less reduction in arable land for agriculture.
25 http://www.CROSSOVER-project.eu/Workshop/WorkshopProgram.aspx

[Figure: “Causal Diagram Example”, a causal loop diagram linking Society, Economy and Environment variables (population, health, employment, emissions, renewable energy, fuel prices, water stress, water demand, agriculture production, GDP, government expenditure, productivity, fossil fuel consumption, income per capita, nutrition) through positive (+) and negative (-) links, with reinforcing (R) and balancing (B) loops: Nutrition loop, Agriculture-water loop, Health-GDP-energy loop, Health-GDP loop.]
  
These illustrate important cross-sector effects and show how policies need to take a broad view of their results. By generating scenarios over the longer term, until 2030, the model was able to show how continuing business as usual would lead to significant challenges and tipping points on water demand, agricultural production, and GHG emissions. It was then demonstrated how policies that shift levels of consumption and innovation would have significant impacts on sustainability, including that a somewhat slower increase in overall consumption may be critical for achieving sustainability on these indicators, despite lower GDP growth and job creation. This shows that it is important to develop policies that mitigate the weaker performance while assuring sustainability, and that provide everyone with an acceptable living standard. It clearly illustrates the policy challenges faced and lays the basis for developing more effective policies.
  
	
  
Source: “Consumption and Sustainability: A Quantitative Approach Based on T21 China”, presented by Weishuang Qu at the Transatlantic Research on Policy Modelling Workshop26
	
  	
  
	
  
Available Tools
  
Research about collaborative software has been conducted since the mid-1980s, when computer-human interaction, office automation, and support for group work became the focus of research projects. The term computer-supported cooperative work (CSCW) was first used in 1984 and focused on the support of small groups of people. Other terms are used as synonyms for CSCW, especially collaborative computing, computer-mediated communication, and group decision support systems.
26 http://www.CROSSOVER-project.eu/Workshop/WorkshopProgram.aspx
  
Scenario summary for 2030 (T21 China):

Indicator                | Unit           | Baseline | Low Consump, High Tech | High Consump, Low Tech
real GDP                 | RMB2000/Yr     | 6.67E+13 | 5.64E+13               | 7.23E+13
per capita real GDP      | RMB2000/Yr     | 46,829   | 39,745                 | 47,580
unemployment rate        | % of workforce | 6.12%    | 18.67%                 | 0.00%
total electricity demand | Bn KWH/Yr      | 8,191    | 7,124                  | 8,949
total petroleum demand   | MT/Yr          | 1,059    | 791                    | 1,294
fossil fuel CO2 emission | Ton/Yr         | 1.16E+10 | 8.87E+09               | 1.40E+10
Agriculture Land         | Ha             | 1.15E+08 | 1.19E+08               | 1.08E+08
total water demand       | Ton/Yr         | 6.07E+11 | 4.61E+11               | 7.93E+11
  
CSCW is defined as a “computer-assisted coordinated activity such as communication and problem solving carried out by a group of collaborating individuals”, or as a system which “looks at how groups work and seeks to discover how technology (especially computers) can help them work”. The term groupware also stems from the 1980s and is defined as “computer-based systems that support groups of people engaged in a common task (or goal) and that provide an interface to a shared environment”. Interestingly, some authors see groupware as advanced software that has to provide awareness support, while other authors also count code management or emailing as groupware systems. In contrast to groupware, CSCW does not only comprise the technological aspects of collaboration, but also incorporates psychological, social, and organizational effects.
  	
  
Collaborative technologies, especially in the field of groupware and CSCW, are typically classified using the time-space taxonomy, which distinguishes between communication that occurs in the same place or concurrently in different places, and communication that occurs at the same time (synchronously) or at different times (asynchronously). This view was established in 1988 by R. Johansen (“GroupWare: Computer Support for Business Teams”, The Free Press, New York) and taken up in various related publications. The following figure depicts the typical time-space matrix as presented in these publications.
  
	
  
	
  
	
Figure 8: the Time-Space Matrix
  
	
  
The matrix divides collaborative technologies into four possible constellations, and each of these constellations can be supported better or worse by different communication media.
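
A minimal encoding of the matrix follows; the example media listed for each constellation are common illustrations of ours, not taken from Johansen:

    from enum import Enum

    class Time(Enum):
        SAME = "synchronous"
        DIFFERENT = "asynchronous"

    class Place(Enum):
        SAME = "co-located"
        DIFFERENT = "remote"

    # The four constellations of the time-space matrix, with example media.
    TIME_SPACE_MATRIX = {
        (Time.SAME, Place.SAME): "face-to-face meeting, decision room",
        (Time.SAME, Place.DIFFERENT): "video conference, shared whiteboard",
        (Time.DIFFERENT, Place.SAME): "team room wall charts, shift logs",
        (Time.DIFFERENT, Place.DIFFERENT): "e-mail, forums, shared repositories",
    }

    def classify(time: Time, place: Place) -> str:
        return TIME_SPACE_MATRIX[(time, place)]

    print(classify(Time.DIFFERENT, Place.DIFFERENT))  # e-mail, forums, ...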
  
Meanwhile, the architecture of a collaborative modelling tool, i.e. a system that supports a group in developing models, is still under investigation. Some authors have suggested groupware systems that help teams in collective sense-making, which is an important part of the modelling process. Conklin, Selvin, Buckingham and Sierhuis, in “Facilitated Hypertext for Collective Sensemaking: 15 Years on from gIBIS”, a paper presented in 2003 at the 8th International Working Conference on the Language-Action Perspective on Communication Modelling (Tilburg, The Netherlands), report on an approach, Compendium, that is the result of 15 years of experience. Compendium combines three different areas: meeting facilitation, graphical hypertext and conceptual frameworks. To make them work, facilitation is viewed as essential to remove the cognitive overhead for the group members.
As groupware systems address the important issue of collective sense-making, they can be used as the core of a collaborative modelling tool. So far, though, these systems are typically tailored to specific modelling languages. For a collaborative modelling tool they need to be more modular, so that any modelling language can be “plugged in” (e.g., other enterprise or information systems modelling languages). In addition, there is also the need for a negotiation component that facilitates structured arguments and decisions regarding modelling choices. Based on these reflections and issues, two tools have recently been emerging:
  
• The COllaborative Modelling Architecture, COMA27, allows group modelling. Any group member can work on the models whenever it suits them, and any participant can contribute in the way they can: by just looking at proposals and commenting on them, by making minor changes to them, or even by making their own proposals. The facilitator can see the status of the modelling process at any time and can decide whether a certain proposal should be adopted or needs improvement, based on the comments of the other group members and the facilitator’s own judgment.
Figure 9: COMA, COllaborative Modelling Architecture
  
COMA’s design has been inspired by theoretical insights from organizational semiotics and driven by observations of group modelling behaviour. The tool is implemented in Visual C++ 2005 on Windows, based on UML Pad and the wxWidgets GUI library28.
  	
  	
  
• The OCOPOMO eParticipation platform, deployed by the Open Collaboration for Policy Modelling FP7 project, which ended in December 2012.
27 www.coma.nu
28 http://www.wxwidgets.org/
Figure 10: OCOPOMO eParticipation Platform
  
The platform is a suite of ICT tools for:
o Iterative development of policies in the form of narrative scenarios;
o Policy modelling, i.e. the creation of agent-based formal policy models;
o Open and transparent collaboration in the process of policy development;
o Seamless, goal-oriented information exchange between all the stakeholders (policy analysts, operators, decision makers, wider interest groups, the general public, etc.);
o Simulation and visualisation of policy alternatives and their consequences.
A first prototype was released in autumn 2011 and tested in a first round of pilot applications started in winter 2011. A second round of pilot applications and evaluation started in autumn 2012, and the platform was released in December 2012.
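
To give a flavour of the “agent-based formal policy models” mentioned in the list above, here is a minimal, self-contained opinion-dynamics sketch. It is our own toy example (a bounded-confidence model), not OCOPOMO's actual simulation engine: agents hold an opinion about a policy in [0, 1] and repeatedly average with a random peer when their opinions are close enough.

    import random

    def simulate_opinions(n_agents=50, steps=2000, confidence=0.2, seed=1):
        """Toy bounded-confidence model: close opinions attract each other."""
        rng = random.Random(seed)
        opinions = [rng.random() for _ in range(n_agents)]
        for _ in range(steps):
            i, j = rng.sample(range(n_agents), 2)
            if abs(opinions[i] - opinions[j]) < confidence:
                mid = (opinions[i] + opinions[j]) / 2  # mutual adjustment
                opinions[i] = opinions[j] = mid
        return opinions

    final = simulate_opinions()
    print(sorted(round(o, 1) for o in final))  # opinions collapse into a few clusters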
  
	
  
Key challenges and gaps
  
This research challenge is connected to the research on Web 2.0 and the next generation web. As far as Policy Modelling in Governance is concerned, this research challenge bridges the gap between citizens and decision makers. It permits an early-stage evaluation of decision makers’ mental models by opening a dialogue with citizens, and allows for an exchange of perspectives. It finally enables collaboration in the public policy modelling process through the use of a rigorous and formal scientific process.
  
	
  
Current research
  
According to current research, the following issues are being explored:
  
• Group model building and systems thinking, focusing on models when tackling a mix of interrelated strategic problems to enhance team learning, foster consensus, and create commitment. Although people have different views of the situation and define problems differently, this field of research shows that this can be very productive if and when people learn from each other in order to build a shared perspective.
  
• Web 2.0 tools for collaboration, as recently pointed out in the FP7 project OCOPOMO (Open Collaboration in Policy Modelling), which aims to implement collaborative scenario building and policy modelling via an integrated ICT toolbox. OCOPOMO provides an innovative “off the mainstream” bottom-up approach to policy development, combined with advanced ICT tools and techniques supporting open collaboration. The project is developing an ICT-based environment integrating lessons and practical techniques from complexity science, agent-based social simulation, foresight scenario analysis and stakeholder participation, in order to formulate and monitor social policies to be adopted at several levels. The project is co-funded by the European Commission under the 7th Framework Programme, Theme 7.3 (ICT for Governance and Policy Modelling).
  
	
  
Future research

Future research should therefore focus on:
  
• Collaborative Internet-based modelling tools, allowing more than one modeller to cooperate, at the same time, on a single model.
• Definition of frameworks allowing even “low-skilled” citizens to provide their contribution (even if in a discursive way) to the modelling process.
• Design of more intuitive and accessible Human-Computer Interfaces.
  
	
  
3.1.3. Easy Access to Information and Knowledge Creation

Introduction and definition
  
According to a cybernetic view of intelligent organisations, knowledge supersedes 1. facts, 2. data (statements about facts) and 3. meaningful information (what changes us), the last also defined as “the difference that makes the difference”. Knowledge is most often defined as “whatever is known, the body of truth, information and principles acquired” by a subject on a certain topic. Therefore knowledge is always embodied in someone. It implies insight, which, in turn, enables orientation, and thus may also be used as a potential for action (when we are able to use information in a certain environment, we start to learn, which is the process that helps to develop and ground knowledge). Two more concepts come after knowledge on the same scale: Understanding and Wisdom. Understanding is the ability to transform knowledge into effective action, i.e. in-depth knowledge, involving both deep insights into the patterns of relationships that generate the behaviour of a system and the ability to convey knowledge to others, whereas wisdom is a higher quality of knowledge and understanding that also encompasses the ethical and aesthetic dimensions.
  
This research challenge is related to the elicitation of information which, in turn, during the overall model building and use processes, will help decision makers learn how a certain system works and ultimately gain insights (knowledge) and understanding (applying the knowledge extracted from those processes) in order to successfully implement a desired policy. It is important to note that other research fields (in particular, ICT disciplines) tend to misuse the word “knowledge” and conflate it with “information”.
  	
  
	
  
Why it matters in governance
  
Proper information acquisition and knowledge development are key aspects in all research fields, so this research challenge has a horizontal importance for research in general. According to the general need for policy assessment and evaluation, there are some specific issues stemming from this research challenge which are strongly related to governance:
  
• Public data use and thus public information elicitation (by citizens);
• Citizens’ behavioural data, which are gradually becoming essential for any policy assessment process;
• Interoperability of public IT systems;
• Creation of a common understanding of a certain system’s behaviour (by means of learning), in order to develop a shared vision of the problems that a certain policy might want to overcome.
  
	
  
Current Practice and Inspiring cases
  
In current practice, information is drawn from data stored in different types of media (mainly DBMS/ERPs). Web 2.0 has further transformed the way we create data and elicit information from data. Data availability has ceased to pose problems as a result of:
  
• The growth of the Internet and its uptake
• User Generated Content in Social Networks
  
• Cooperation of IT systems from different organisations thanks to Service-Oriented Architectures (even among old legacy systems), which has also resulted in private data availability
• Public Administration transparency and public data use/reuse
  
	
  
Available Tools

A review of the available tools is to be finalized.
  
	
  
Key challenges and gaps
  
Knowledge is still mostly created and passed on by formal methods of teaching, even though the advent of the e-Learning, m-Learning and webinar fields allows for an increased possibility to perform distance learning on the Web. But, since knowledge is developed and grounded by the learning process through action in the environment, learning in real life comes from committing mistakes. In the field of real-life governance, this entails implementing a wrong policy and observing the positive and negative consequences that this policy generates (for example due to a system’s “policy resistance”). Learning from successes is also important, as the A.I. (Appreciative Inquiry) method, based on positive psychology, shows. At present, thanks to increasing data availability, the information elicitation process is much easier, either by tacitly bringing users (data generators) to provide data in a guided way (according to a pre-set framework for data input) or with the help of a specific process (e.g. consultations in e-Participation tools).
  
	
  
Current research
  
According to current research, the main focus is placed on the Knowledge Management field or also (more properly, as in our case) on the Knowledge Elicitation field. The latter basically encompasses the following steps:
  
• Data retrieval and extraction
• Data analysis and interpretation (which usually produces information)
• Data/information adaptation and integration (this is particularly the case where information needs to be used in a model)
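
A minimal pipeline sketch of those three steps, with function names that are illustrative placeholders of ours:

    # Illustrative three-step knowledge-elicitation pipeline.
    def retrieve(source):
        """Step 1 - data retrieval/extraction: pull usable raw records."""
        return [r for r in source if r.get("value") is not None]

    def analyse(records):
        """Step 2 - analysis/interpretation: turn raw data into information."""
        values = [r["value"] for r in records]
        return {"n": len(values), "mean": sum(values) / len(values)}

    def adapt_for_model(info):
        """Step 3 - adaptation/integration: reshape information as model input."""
        return {"sample_size": info["n"], "calibration_target": info["mean"]}

    raw = [{"value": 3.2}, {"value": None}, {"value": 4.0}]
    print(adapt_for_model(analyse(retrieve(raw))))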
  
	
  
Future research
  
There is still a large field to be explored: the methods of extracting meaningful information from unstructured sources of data, e.g. when analysing free texts, which applies to all sources of User-Generated Content (forums, wikis, social networks, etc.), where the semantic dimension is essential to derive meaningful information rather than just quantitatively analysing the syntax of the text. In general, a lot of data is generated by citizens, and particularly by their behaviour online, so that the available aggregated data sets contain information on what a citizen does, what s/he likes, how s/he behaves in certain environments, and so on. These data are considered very valuable both for private and public organisations (even though under privacy restrictions which have to be properly addressed).
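
A minimal illustration of why syntax-only analysis falls short: counting tokens is easy, but attaching even crude semantics (here, a hand-made polarity lexicon) already requires extra knowledge. Everything below is a toy sketch of ours, not a reference implementation:

    import re
    from collections import Counter

    # Toy polarity lexicon standing in for real semantic resources.
    POLARITY = {"good": 1, "useful": 1, "bad": -1, "broken": -1}

    def analyse_post(text):
        tokens = re.findall(r"[a-z']+", text.lower())
        freq = Counter(tokens)                            # purely syntactic view
        score = sum(POLARITY.get(t, 0) for t in tokens)   # crude semantic view
        return freq.most_common(3), score

    top, score = analyse_post("The new portal is useful, but the search is broken.")
    print(top)    # the most frequent tokens say little by themselves
    print(score)  # 0: one positive and one negative cue cancel out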
  	
  
Also, regarding knowledge creation and the development of understanding (of a specific system), there is some research currently carried out on how to improve the learning process via the use of e-Learning systems. In this respect, it is crucial to boost research on micro-worlds, i.e. complex virtual environments where reality is somehow reproduced and where a decision maker is trained in order to implement his/her strategies and hypotheses and perform what-if analysis, without the need to necessarily learn from mistakes in real life.
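
In its most stripped-down form, a micro-world is just a model plus an interactive policy lever. The sketch below, entirely our own toy with made-up dynamics, lets a trainee compare what-if policy settings against the baseline without real-world consequences:

    def run_microworld(tax_rate, years=10):
        """Toy economy: taxes fund services that slowly raise productivity,
        but high taxes depress output in the short run."""
        output, services = 100.0, 10.0
        for _ in range(years):
            revenue = tax_rate * output
            services += 0.5 * revenue - 0.1 * services   # stock of public services
            output += 0.02 * services - 30.0 * tax_rate  # growth vs. tax drag
        return round(output, 1)

    # What-if analysis: try policy levers in the micro-world, not in real life.
    for rate in (0.0, 0.1, 0.2, 0.3):
        print(f"tax={rate:.1f} -> output after 10 years: {run_microworld(rate)}")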
  
	
  
Future research will thus have to focus on the following issues:
  
• Information elicitation by analysing and interpreting data, also taking the semantic point of view into account.
• Creation of proper micro-worlds (or ILEs, Interactive Learning Environments), where the acquired information on a certain system is used (by means of actions), and knowledge is developed by observing the outcomes of those actions. ILEs will also have to be integrated into LMSs (Learning Management Systems) in order to extend the potential of distance learning practices, eventually also in a cooperative way (mass learning).
• Interoperability of data sources, in order to integrate/aggregate different types of data and be able to automatically infer information from more meaningful datasets.
• In view of the “Internet of Things”, the provision of “portable” models/tools for citizens, in order to gather valuable data based on citizens’ real behaviours. Moreover, these models and tools would enable citizens to check the results of their actions by analysing in real time the response of the model to the information they are contributing to generate, and thus to evaluate the eventual benefits they are receiving from their virtuous behaviour, or the harm they are creating either to their environment or to themselves (e-Cognocracy).
3.1.4. Model Validation

Introduction and definition
  
Policy makers need and use information stemming from simulations in order to develop more effective policies. As citizens, public administration and other stakeholders are affected by decisions based on these models, the reliability of the applied models is crucial. Model validation can be defined as “substantiation that a computerised model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model” (Schlesinger, 1979). Therefore, a policy model should be developed for a specific purpose (or context) and its validity determined with respect to that purpose (or context)29. If the purpose of such a model is to answer a variety of questions, the validity of the model needs to be determined with respect to each question. A model is considered valid for a set of experimental conditions if the model’s accuracy is within its acceptable range, which is the amount of accuracy required for the model’s intended purpose. The substantiation that a model is valid is generally considered to be a process and is usually part of the (total) policy model development process (Sargent, 2008). For this purpose, specific and integrated techniques and ICT tools need to be developed for policy modelling.
  
	
  
Model validation is composed of two main phases:
  
• Conceptual model validation, i.e. determining that the theories and assumptions underlying the conceptual model are correct, and that the model’s representation of the problem entity and the model’s structure, logic, and mathematical and causal relationships are “reasonable” for the intended purpose of the model.
• Computerised model verification, which ensures that the computer programming and implementation of the conceptual model are correct, and that the overall behaviour of the model is in line with the available historical data.
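
The second phase, checking model behaviour against historical data, can be sketched as a simple accuracy test. The error measure and threshold below are our own illustrative choices; in practice the “acceptable range” follows from the model’s intended purpose:

    def within_acceptable_range(simulated, observed, tolerance=0.10):
        """Verify that simulated output tracks historical data within a
        relative tolerance (here 10%), point by point."""
        assert len(simulated) == len(observed)
        errors = [abs(s - o) / abs(o) for s, o in zip(simulated, observed)]
        return max(errors) <= tolerance, errors

    ok, errors = within_acceptable_range(
        simulated=[102.0, 108.5, 114.0],
        observed=[100.0, 110.0, 118.0])
    print(ok, [round(e, 3) for e in errors])  # True: worst point is off by ~3.4%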
  
	
  
Why it matters in governance
  
Model validation is connected both to modelling and to simulation. According to the general need for policy assessment and evaluation, there are some specific issues stemming from model validation which are strongly related to governance:
  
• Reliability of models: policy makers use simulation results to develop effective policies that have an important impact on citizens, public administration and other stakeholders. Model validation is fundamental to guarantee that the output (simulation results) given to policy makers is reliable.
• Acceleration of the policy modelling process: policy models must be developed in a timely manner and at minimum cost in order to efficiently and effectively support policy makers. Model validation is both cost- and time-consuming, and should be automated and accelerated.
29 Some researchers claim that the category “validity” has little meaning in relation to policy models, as they are generally a form of narrative or storytelling, so that their value comes in the act of obtaining a better understanding of the system and being able to communicate concepts effectively and spur discussion between different stakeholders.
• Modular	
  and	
  re-­‐usable	
  models:	
  a	
  policy	
  model	
  developer	
  deciding	
  to	
  re-­‐use	
  existing	
  models	
  
or	
  compose	
  them,	
  stumble	
  across	
  the	
  issue	
  of	
  models’	
  reliability.	
  Model	
  validation	
  can	
  be	
  
used	
  for	
  certifying	
  this	
  reliability	
  and	
  creating	
  a	
  database	
  of	
  validated	
  models.	
  	
  
Current Practice and Inspiring cases
  
In current practice, the most frequently used approach is a decision by the development team, based on the results of the various tests and evaluations conducted as part of the model development process. Another approach is to engage users in the validation process. When developing large-scale simulation models, the validation of a model can be carried out by an independent third party. Needless to say, the third party needs to have a thorough understanding of the intended purpose of the simulation model. Finally, a scoring model can be used for testing the model's validity (e.g. see Balci 1989; Gass 1983; Gass & Joel 1987). Scores (or weights) are determined subjectively when conducting various aspects of the validation process and then combined to determine category scores and an overall score for the simulation model. A simulation model is considered valid if its overall and category scores are greater than some passing score.
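To make the scoring approach concrete, the following minimal sketch illustrates how subjectively assigned scores might be combined into category scores and an overall score; the categories, weights and passing thresholds are hypothetical assumptions, not values taken from the cited works.

```python
# Hypothetical scoring-model sketch for testing model validity.
# All categories, scores, weights and thresholds are illustrative assumptions.

def category_score(scores_weights):
    """Combine subjective test scores (0-100) into a weighted category score."""
    total_weight = sum(w for _, w in scores_weights)
    return sum(s * w for s, w in scores_weights) / total_weight

categories = {
    # category name: list of (subjective score, weight) pairs for individual tests
    "conceptual validity":  [(85, 2.0), (70, 1.0)],
    "data validity":        [(90, 1.0), (80, 1.0)],
    "operational validity": [(75, 3.0), (88, 1.0)],
}
category_weights = {"conceptual validity": 1.0, "data validity": 1.0, "operational validity": 2.0}

cat_scores = {c: category_score(sw) for c, sw in categories.items()}
overall = (sum(cat_scores[c] * category_weights[c] for c in cat_scores)
           / sum(category_weights.values()))

PASSING_CATEGORY, PASSING_OVERALL = 70.0, 75.0  # assumed passing scores
valid = overall >= PASSING_OVERALL and all(s >= PASSING_CATEGORY for s in cat_scores.values())
print(cat_scores, round(overall, 1), "valid" if valid else "not valid")
```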
  
	
  
Available Tools
A review of the available tools is to be finalized.
  
	
  
Key challenges and gaps
  
Typically, all the above-mentioned approaches are applied after the simulation model has been developed. Performing a complete validation effort after the simulation model has been finalised requires both time and money. However, conducting model validation concurrently with the development of the simulation model enables the model development team to receive inputs earlier at each stage of model development. Therefore, ICT tools for speeding up, automating and integrating the model validation process into the policy model development process are necessary to guarantee the validity of models with an effective use of resources.
  
	
  
Current research
  
In current research, there are a large number of subjective and objective validation techniques used for verifying and validating the modules and the overall model. In 2010, Robert G. Sargent of Syracuse University provided a list of the most relevant ones: Animation; Comparison to Other Models; Degenerate Tests; Event Validity; Extreme Condition Tests; Face Validity; Historical Data Validation; Historical Methods; Internal Validity; Multistage Validation; Operational Graphics; Parameter Variability / Sensitivity Analysis; Predictive Validation; Traces; and Turing Tests. Furthermore, he described a new statistical procedure for validating simulation and analytic stochastic models using hypothesis testing when the required amount of model accuracy is specified. This procedure provides for the model to be accepted if the differences between the system and the model outputs are within the specified ranges of accuracy. The system must be observable to allow data to be collected for validation.
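As an illustration of the general idea (not Sargent's exact procedure), the following sketch accepts a model only if a confidence interval for the mean difference between paired system and model outputs falls entirely within an assumed accuracy range; all data and thresholds are hypothetical.

```python
# Minimal sketch of an interval-based acceptance check in the spirit of the
# procedure described above; it is NOT Sargent's exact test. The model is
# accepted only if the CI for the mean system/model difference lies entirely
# within the specified accuracy range. Data and ranges are assumptions.
import math
import statistics

system_output = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3]   # observed system data (assumed)
model_output  = [10.6, 10.1, 10.8, 10.3, 10.2, 10.9, 10.4, 10.5]  # paired model runs (assumed)

diffs = [m - s for m, s in zip(model_output, system_output)]
n = len(diffs)
mean_d = statistics.mean(diffs)
half_width = 2.365 * statistics.stdev(diffs) / math.sqrt(n)  # t(0.975, df=7) = 2.365

ACCURACY_RANGE = (-0.5, 0.5)  # specified acceptable range for the mean difference
lo, hi = mean_d - half_width, mean_d + half_width
accepted = ACCURACY_RANGE[0] <= lo and hi <= ACCURACY_RANGE[1]
print(f"95% CI for mean difference: [{lo:.3f}, {hi:.3f}] -> {'accept' if accepted else 'reject'}")
```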
  	
  
	
  
Future research
Future research should explore the following issues:
  
• In order to speed up and reduce the cost of a model validation process, user-friendly and collaborative statistical software should be developed, possibly combined with expert systems and artificial intelligence.
  
• Due to the big gap between theory and practice, a considerable opportunity exists for the study and application of rigorous verification and validation techniques. In current practice, the comparison of model and system performance measures is typically carried out in an informal manner.
  
• Complicated simulation models are usually either not validated at all or are only subjectively validated; for example, animated output is eyeballed for a short while. Therefore, complexity issues in model validation may be better addressed through the development of more suitable methodologies and tools.
  
• Model validation is not a discrete step in the simulation process. It needs to be applied continuously, from the formulation of the problem to the implementation of the study findings, as a completely validated and verified model does not exist: the validation and verification process of a model is never completed.
  
• As model developers are inevitably biased and may concentrate on the positive features of a given model, the third-party approach (a board of experts) seems to be a better solution in model validation.
  
• Considering the range that simulation studies cover (from small models to very large-scale simulation models), further research is needed to determine, with respect to the size and type of simulation study:
o which model validation approach should be used,
o how model validation should be managed,
o what type of support system software for model validation is needed.
  
• Validating large-scale simulations that combine different simulation (sub-)models and use different types of computer hardware, such as is currently being done with the HLA (High Level Architecture). A number of these VV&A (verification, validation and accreditation) issues need research, e.g. how one verifies that the simulation clocks and event (message) times (timestamps) have the same representation (floating point, word size, etc.) and validates that events having time ties are handled properly.
3.1.5. Immersive Simulation
  
	
  
Introduction and definition
  
As policy models grow in size and complexity, the process of analysing and visualising the resulting large amounts of data becomes an increasingly difficult task. Traditionally, data analysis and visualisation were performed as post-processing steps after a simulation had been completed. As simulations increased in size, this task became increasingly difficult, often requiring significant computation, high-performance machines, high-capacity storage, and high-bandwidth networks. Computational steering is an emerging technology that addresses this problem by "closing the loop" and providing a mechanism for integrating modelling, simulation, data analysis and visualisation. This integration allows a researcher to interactively control simulations and perform data analysis while avoiding many of the pitfalls associated with the traditional batch/post-processing cycle. This research challenge refers to the integration of visualisation techniques within an integrated simulation environment. Such integration plays a crucial role in making the policy modelling process more extensive and, at the same time, comprehensible. In fact, the real aim of interactive simulation is, on the one hand, to allow model developers to easily manage complex models and their integration with data (e.g. real-time data or qualitative data integration) and, on the other hand, to allow the other stakeholders not only to better understand the simulation results, but also to understand the model and, eventually, to be involved in the modelling process. Interactive simulation can dramatically increase the efficiency and effectiveness of the modelling and simulation process, allowing the inclusion and automation of some phases (e.g. output and feedback analysis) that have not been managed in a structured way up to this point.
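As an illustration of the "closing the loop" idea, the following minimal sketch lets a user change a parameter of a simulation while it is running; the toy model, the growth_rate parameter and the update source are all hypothetical assumptions, not any specific steering tool.

```python
# Minimal computational-steering sketch: a toy simulation loop whose parameter
# can be changed while it runs, "closing the loop" between user and simulation.
import threading
import time

class SteeringState:
    def __init__(self, growth_rate):
        self.lock = threading.Lock()
        self.growth_rate = growth_rate   # parameter the user may steer at runtime

def simulate(state, steps=10):
    value = 1.0
    for step in range(steps):
        with state.lock:
            rate = state.growth_rate     # pick up the latest steered value
        value *= (1.0 + rate)            # toy model dynamics
        print(f"step {step}: rate={rate:+.2f} value={value:.3f}")
        time.sleep(0.1)                  # stands in for real computation

state = SteeringState(growth_rate=0.05)
sim = threading.Thread(target=simulate, args=(state,))
sim.start()
time.sleep(0.35)
with state.lock:
    state.growth_rate = -0.02            # the user steers the running simulation
sim.join()
```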
  
	
  
Why it matters in governance
  
Immersive simulation is a particular aspect of simulation. As far as policy assessment in governance is concerned, this challenge may:
• Accelerate the simulation process: policy makers would be able to analyse simulation results, eventually run new scenarios and make decisions as soon as possible and at minimum cost.
• Provide a collaborative environment: the larger the number of stakeholders involved in the policy modelling and simulation process, the greater the need for an interactive simulation environment that allows non-experts to use the model and understand the results, and permits experts to easily understand new requirements and the consequent modifications.
• Citizen engagement: interactive simulation tools help to engage citizens in the policy-making process and to display the results to them in a simple way.
• Data integration: interactive simulation tools allow better management of a large number and variety of data and information, both for input and for output/feedback analysis.
  	
  
	
  
Current Practice and Inspiring cases
  
In current practice, data analysis and visualisation, albeit critical for the process, are often performed as a post-processing step after batch jobs are run. For this reason, errors invalidating the results of the entire simulation may be discovered only during post-processing. What is more, the decoupling of simulation and analysis/visualisation can present serious scientific obstacles to the researcher in interpreting the answers to "what if" questions. Given the limitations of the batch/post-processing cycle, it might be advisable to break the cycle and improve the integration of simulation and visualisation. Implementation of an interactive simulation and visualisation environment requires a successful integration of the many aspects of scientific computing, including performance analysis, geometric modelling, numerical analysis, and scientific visualisation. These requirements need to be effectively coordinated within an efficient computing environment. Recently, several tools and environments for computational steering have been developed. They range from tools that modify performance characteristics of running applications, either by automated means or by user interaction, to tools that modify the underlying computational application, thereby allowing application steering of the computational process.
  	
  
	
  
Available Tools
A review of the available tools is to be finalized.
  
	
  
Key challenges and gaps
  
The development of immersive tools is still based on model developers' needs, and therefore a gap still exists between the requirements of policy makers and those of developers. In a collaborative modelling environment, interaction is fundamental in order to speed up the process and make ICT tools user-friendly for all the stakeholders involved in the policy model development process.
  	
  
	
  
Current research
  
In current research, interactive visualisation typically combines two main approaches: providing efficient algorithms for the presentation of data and providing efficient access to the data. The first advance is evident albeit challenging. Even though computers continually get faster, data sizes are growing at an even more rapid rate; therefore, the total time from data to picture is not decreasing for many problem domains. Alternative algorithms, such as ray tracing (Nakayama, 2002) and view-dependent algorithms (Lessig, 2009), can restore a degree of interactivity for very large datasets. Each of those algorithms has its trade-offs and is suitable for a different scenario. The second advance is less evident but very powerful. Through the integration of visualisation tools with simulation codes, a scientist can achieve a new degree of interactivity through the direct visualisation and even manipulation of the data. The scientist does not necessarily wait for the computation to finish before interacting with the data, but can interact with a running simulation. While conceptually simple, this approach poses numerous technical challenges.
  
	
  
Future research
  
With regard to future research, interactive simulation plays a crucial role in a collaborative modelling environment. The trade-off between the possibility of enlarging models and including several kinds of data, on the one hand, and the number of people that can understand and modify the model, on the other, should be deeply analysed. For this purpose, some fundamental issues must be addressed:
• Systems should be modular and easy to extend within the existing codes.
• Users of the systems should be able to add new capabilities easily, without being experts in systems programming.
• Input/output systems should be easily integrated.
• Steering systems should be adaptable to hardware ranging from the largest supercomputing systems to low-end workstations and PCs.
  
3.1.6. Output Analysis and Knowledge Synthesis
  
	
  
Introduction and definition
  
Inputs driving a simulation are often random variables and, because of this randomness in the components driving simulations, the output from a simulation is also random, so statistical techniques must be used to analyse the results. In particular, the output processes are often non-stationary and auto-correlated, and classical statistical techniques based on independent identically distributed observations are not directly applicable. In addition, by observing a simulation output it is possible to infer the general structure of a system, ultimately gaining insights on that system and being able to synthesise knowledge about it. There is also the possibility to review the initial assumptions by observing the outcome and comparing it to the expected response of the system, i.e. performing a modelling feedback on the initial model. Finally, one of the most important uses of simulation output analysis is the comparison of competing systems or alternative system configurations.
Visualisation tools are essential for the correct execution of this iterative step. The present research challenge deals with the issue of output analysis of a policy model and, at the same time, of feedback analysis, in order to incrementally increase and synthesise the knowledge of the system.
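As an illustration of such a comparison of alternative configurations, the following sketch uses paired replications and a confidence interval for the mean difference; the output data (e.g. mean waiting times, where lower is better) and replication count are hypothetical assumptions.

```python
# Minimal sketch of comparing two alternative system configurations from
# simulation output, using paired independent replications and a confidence
# interval for the mean difference. All data here are illustrative assumptions.
import math
import statistics

config_a = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7, 12.4, 12.0]  # mean waiting times
config_b = [11.2, 11.0, 11.6, 11.3, 10.9, 11.5, 11.4, 11.1, 11.7, 11.2]  # paired replications

diffs = [a - b for a, b in zip(config_a, config_b)]
n = len(diffs)
mean_d = statistics.mean(diffs)
half_width = 2.262 * statistics.stdev(diffs) / math.sqrt(n)  # t(0.975, df=9) = 2.262

lo, hi = mean_d - half_width, mean_d + half_width
if lo > 0:
    print(f"Configuration B has lower waiting time (CI for A-B: [{lo:.3f}, {hi:.3f}])")
elif hi < 0:
    print(f"Configuration A has lower waiting time (CI for A-B: [{lo:.3f}, {hi:.3f}])")
else:
    print(f"No significant difference (CI for A-B: [{lo:.3f}, {hi:.3f}])")
```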
  
	
  
Why it matters in governance
  
Output analysis is a specific aspect of simulation. Given the general need for policy assessment and evaluation, there are some specific issues stemming from output analysis which are strongly related to governance:
• Acceleration of the policy assessment process: automated output analysis tools would help policy makers to efficiently and effectively analyse the impacts of a policy, even when a large amount of simulation data must be taken into account.
• Citizen engagement: user-friendly automated tools for output analysis can be offered to citizens in order to share the simulation results and better engage them in the policy-making process.
  
	
  
Current Practice and Inspiring cases
  
In current practice, a large amount of time and financial resources are spent on model development and programming, but little effort is allocated to analysing the simulation output data in an appropriate manner. As a matter of fact, a very common way of operating is to make a single simulation run of somewhat arbitrary length and then treat the resulting simulation estimates as the "true" characteristics of the model. Since random samples from probability distributions are typically used to drive a simulation model through time, these estimates are realisations of random variables that may have large variances. As a result, these estimates could, in a particular simulation run, differ greatly from the corresponding true answers for the model. The net effect is that there may be a significant probability of making erroneous inferences about the system under study. Historically, there are several reasons why output data analysis was not conducted in an appropriate manner. First, users often have the unfortunate impression that simulation is just an exercise in computer programming. Consequently, many simulation studies begin with heuristic model building and computer coding, and end with a single run of the program to produce "the answers". In fact, however, a simulation is a computer-based statistical sampling experiment. Thus, if the results of a simulation study are to have any meaning, appropriate statistical techniques must be used to design and analyse the simulation experiments, and ICT tools must be developed to make the process more effective and efficient. In addition, there are some important issues of output analysis that are not strictly connected to statistics. In particular, an evident gap in the literature regards the analysis and integration of feedback in the modelling and simulation process. At present, stakeholders are involved in a post-processing phase, in order to analyse the results (more often only an elaboration of them) and understand something about the policy. Sometimes they are able to give feedback on the difference between their expectations and the results, but the process is not structured and effective tools are lacking. The development of tools for analysing and integrating feedback should be explored in order to enlarge the number of stakeholders involved and, at the same time, to allow efficient and effective modification at each phase of the process, incrementally increasing the knowledge of the model and, consequently, of the given policy.
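The standard remedy for the single-run fallacy described above is to run several independent replications and report a confidence interval rather than a point "answer". The following sketch illustrates this with a toy M/M/1 queue; the model, its parameters and the replication count are illustrative assumptions.

```python
# Minimal sketch: independent replications with a confidence interval, instead
# of one arbitrary run treated as the "true" model characteristics.
# The toy queue model and all parameters are illustrative assumptions.
import math
import random
import statistics

def one_replication(seed, n_customers=1000, arrival_rate=1.0, service_rate=1.2):
    """Simulate a toy M/M/1 queue; return the mean waiting time for one run."""
    rng = random.Random(seed)
    clock = depart = total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(arrival_rate)          # next arrival time
        start = max(clock, depart)                      # wait if server is busy
        total_wait += start - clock
        depart = start + rng.expovariate(service_rate)  # service completion
    return total_wait / n_customers

waits = [one_replication(seed) for seed in range(20)]   # 20 independent replications
mean_w = statistics.mean(waits)
half_width = 2.093 * statistics.stdev(waits) / math.sqrt(len(waits))  # t(0.975, df=19)
print(f"mean waiting time: {mean_w:.3f} +/- {half_width:.3f} (95% CI)")
```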
  
	
  
Available Tools
A review of the available tools is to be finalized.
  
	
  
Key challenges and gaps
  
A fundamental issue for statistical analysis is that the output processes of virtually all simulations are non-stationary (the distributions of the successive observations change over time) and auto-correlated (the observations in the process are correlated with each other). Thus, classical statistical techniques based on independent identically distributed observations are not directly applicable. At present, there are still several output-analysis problems for which there is no commonly accepted solution, and the solutions that are available are often too complicated to apply. Another impediment to obtaining accurate estimates of a model's true parameters or characteristics is the cost of the computer time needed to collect the necessary amount of simulation output data. Indeed, there are situations where an appropriate statistical procedure is available, but the cost of collecting the amount of data dictated by the procedure is prohibitive.
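One standard answer to the autocorrelation problem is the method of batch means: a single long run is split into batches whose means are treated as approximately independent observations. The following sketch illustrates the idea; the AR(1) process standing in for real simulation output, and the run length, warm-up and batch count, are all assumptions.

```python
# Minimal sketch of the batch-means method for autocorrelated simulation output.
# Run length, warm-up period and batch count are illustrative assumptions.
import math
import random
import statistics

def batch_means(output, n_batches=10, warmup=100):
    """Estimate the mean and a 95% CI half-width from one long autocorrelated run."""
    data = output[warmup:]                      # discard initialization bias
    size = len(data) // n_batches
    means = [statistics.mean(data[i * size:(i + 1) * size]) for i in range(n_batches)]
    grand = statistics.mean(means)
    half = 2.262 * statistics.stdev(means) / math.sqrt(n_batches)  # t(0.975, df=9)
    return grand, half

# Toy autocorrelated output: an AR(1) process standing in for simulation data.
rng = random.Random(42)
x, series = 0.0, []
for _ in range(2100):
    x = 0.9 * x + rng.gauss(0, 1)
    series.append(x)

grand, half = batch_means(series)
print(f"steady-state mean estimate: {grand:.3f} +/- {half:.3f}")
```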
  	
  	
  
	
  
	
  
Current research
  
In current research, the main references are Law (1983), Nakayama (2002), Alexopoulos & Kim (2002), Goldsman & Tokol (2000), Kelton (1997), Alexopoulos & Seila (1998), Goldsman & Nelson (1998), and Law (2006).
For output analysis, there are two types of simulations:
• Finite-horizon simulations. In this case, the simulation starts at a specific moment and runs until a terminating event occurs. The output process is not expected to achieve steady-state behaviour, and any parameter estimated from the output will be transient, in the sense that its value will depend upon the initial conditions (e.g. a simulation of a vehicle storage and distribution facility over one week).
• Steady-state simulations. The purpose of a steady-state simulation is the study of the long-run behaviour of the system of interest. A performance measure of a system is called a steady-state parameter if it is a characteristic of the equilibrium distribution of an output stochastic process (e.g. a simulation of a continuously operating communication system where the objective is the computation of the mean delay of a data packet).
  
	
  
Future research
  
Referring to the previously cited works, and in particular to Goldsman (2010), future research should further explore the following issues:
• ICT tools for supporting or automating output/feedback analysis
• Allowing an incremental understanding of the model (knowledge synthesis)
• Adapting Design of Experiments (DOE) for policy model simulation
• Use and integration of more sophisticated variance estimators
• Better ranking and selection techniques.
  
	
  
	
  
	
  
3.2. Data-powered Collaborative Governance

3.2.1. Big Data
  	
  
Summary Overview

Current free tools
The freely available tools permit users to overcome data limitations, simplify the analytical process and visualize results. The functionalities provided by these software packages are:
- Massively parallel processing (MPP) database products for large-scale analytics and next-generation data warehousing
- Data-parallel implementations of statistical and machine learning methods
- Visual data mining modelling

Top market tools
- Data storage platforms and other information infrastructure solutions
- Massively parallel processing (MPP)
- Dataflow engines, software interconnect technologies
- Data discovery and exploration tools
- Built-in text analytics, enterprise-grade security and administrative tools
- Real-time analytics processing (RTAP)
- Visualization features supporting exploratory and discovery analytics
- On-line analytical processing (OLAP)
- Business intelligence (BI), Data Warehouse (DW)
- Enterprise Data Warehouse (EDW)

Current and Future Research
- Technologies for collecting, cleaning, storing and managing data: data warehouse; pivotal transformation; ETL; I/O; efficient archiving, storing, indexing, retrieving, and recovery; streaming, filtering, compressed sensing, sufficient statistics; automatic data annotation; large database management systems; storage architectures; data validity, integrity, consistency, uncertainty management; languages, tools, methodologies and programming environments
- Technologies for summarizing data and extracting meaning: reports; dashboards; statistical analysis and inference; Bayesian techniques; information extraction from unstructured, multimodal data; scalable and interactive data visualization; extraction and integration of knowledge from massive, complex, multi-modal, or dynamic data; data mining; scalable machine learning; data-driven high-fidelity simulations; predictive modelling, hypothesis generation and automated discovery
- Technologies for using data as a decision tool: decision trees, pro-con analysis, rule-based systems, neural networks, trade-off-based decisions
  	
  
	
  
Introduction and definition
  
Big Data refers to datasets that cannot be stored, captured, managed and analysed by means of conventional database software. Thereby Big Data is a subjective rather than a technical definition, because it does not involve a quantitative threshold (e.g. in terms of terabytes), but rather a moving technological one. Keeping that in mind, the definition of Big Data in many sectors ranges from a few terabytes [30] to multiple petabytes [31]. The definition of Big Data does not merely involve the use of very large data sets, but also concerns a computational turn in thought and research (Burkholder, L, ed. 1992).

[30] 1 terabyte is equal to 1 trillion bytes
[31] 1 petabyte is equal to 1000 terabytes
  
As stated by Latour (2009), when the tool is changed, the entire social theory that goes with it also changes. In this view, Big Data has emerged as a system of knowledge that is already changing the objects of knowledge itself, as it has the capability to inform how we conceive human networks and community. Big Data creates a radical shift in how we think about research itself. As argued by Lazer et al. (2009), not only are we offered the possibility to collect and analyze data at an unprecedented depth and scale, but there is also a change in the processes of research, the constitution of knowledge, the engagement with information, and the nature and categorization of reality. The potential stemming from the availability of a massive amount of data is exemplified by Google. It is widely believed that the success of the Mountain View company is due to its brilliant algorithms, e.g. PageRank. In reality, the main novelties introduced in 1998, which brought about second-generation search engines, involved the recognition that hyperlinks were an important measure of popularity and the use of the text of hyperlinks (anchortext) in the web index, giving it a weight close to that of the page title. First-generation search engines used only the text of the web pages, while Google added two data sets (hyperlinks and anchortext), so that even a less-than-perfect algorithm exploiting this additional data would obtain roughly the same results as PageRank. Another example is Google's AdWords keyword auction model. Overture had previously shown that ranking advertisers for a given keyword based purely on their bids was an efficient mechanism. Google improved the tool by adding data on the clickthrough rate (CTR) of each advertiser's ad, so that advertisers were ranked by both their bid and their CTR.
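The difference between the two ranking rules can be illustrated with a small sketch; the advertiser names, bids and CTRs below are invented for illustration, and bid x CTR is used here simply as expected revenue per impression.

```python
# Minimal sketch of the two ranking rules mentioned above: Overture-style
# ranking by bid alone versus ranking by bid x CTR (expected revenue per
# impression). All advertiser data are illustrative assumptions.
advertisers = [
    {"name": "A", "bid": 2.00, "ctr": 0.010},
    {"name": "B", "bid": 1.20, "ctr": 0.030},
    {"name": "C", "bid": 1.50, "ctr": 0.015},
]

by_bid = sorted(advertisers, key=lambda a: a["bid"], reverse=True)
by_bid_ctr = sorted(advertisers, key=lambda a: a["bid"] * a["ctr"], reverse=True)

print("ranked by bid:      ", [a["name"] for a in by_bid])      # A, C, B
print("ranked by bid x CTR:", [a["name"] for a in by_bid_ctr])  # B, C, A
```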
  	
  
	
  
Why it matters in governance
  
Big Data has a huge impact also on governance and policy making. In fact, its benefits apply to a wide variety of subjects:
• Health care: making care more preventive and personalized by relying on home-based continuous monitoring, thereby reducing hospitalization costs while increasing quality; detection of infectious disease outbreaks and epidemic development.
• Education: by collecting all the data on students' performance, it would be possible to design more effective approaches. The collection of these data is made possible thanks to the massive Web deployment of educational activities.
• Urban planning: huge high-fidelity geographical datasets describing people and places are generated from administrative systems, cell phone networks, or other similar sources.
• Intelligent transportation based on the analysis and visualization of road network data, so as to implement congestion pricing systems and reduce traffic.
• The use of ubiquitous data collection through sensor networks in order to improve environmental modelling.
• Analysis and clarification of energy use patterns through data analytics and smart meters, which can be useful for the adoption of energy-saving policies and for avoiding blackouts.
• Integrated analysis of contracts to find relations and dependencies among financial institutions, in order to assess systemic financial risk.
• The analysis of conversations in social media and networks, as well as the analysis of financial transactions carried out by alleged terrorists, which can be used for homeland security.
• Better tracking of the food and pharmaceutical production and distribution chains.
• Collecting data on water and sewer usage in order to reduce water consumption by detecting leaks.
• Use of sensors, GPS, cameras and communication systems for crisis detection, management and response.
• Use of sensors' data for carbon footprint management.
  
	
  	
  	
  
Policy Applications of Big Data Tools
  
There is a growing body of evidence highlighting the applications of Big Data not only in traditional hard science and business, but also in policy making, due to the predictive power of the data. Let us consider some applications:
  
• Predictability of human behaviour and social events. A research team from Northwestern University [32] was able to predict people's location based on mobile phone information generated from past movements. Moreover, Pentland from MIT [33] conducted research showing that mobile phones can be used as sensors for predicting human behaviour, as they can quantify human movements in order to explain changes in commuting patterns given, for example, by unemployment. Recently, another research team from Northeastern University was able to predict the voting outcome of a famous US television programme (American Idol) based on Twitter activity during the time span defined by the TV show airing and the voting period following it [34].
	
  
• Public health. Online data can be used for syndromic surveillance, also called infodemiology [35]. As an example, Google Flu Trends is a tool based on the prevalence of Google queries for flu-like symptoms. As shown by Ginsberg et al. (2008) [36], it is then possible to use search queries to detect influenza epidemics in areas with a large population of web search users. In fact, according to the US Centers for Disease Control and Prevention (CDC) [37], a great availability of data coming from online queries can help to detect epidemic outbursts before laboratory analysis. Another related tool is Google Dengue Trends. In this view, the analysis of health-related Tweets in the US by Paul and Dredze (2011) [38] found a high correlation between the modeled and the actual flu rate. In the same way, Twitter data can be analyzed to study the geographic spread of a virus or disease [39]. Finally, we can mention Healthmap [40], in which data from online news, eyewitness reports, expert-curated discussions and official reports are used to get a thorough view of the current global state of infectious diseases, which is visualised on a map.
  
• Global food security. The Food and Agriculture Organization of the UN (FAO) is chartered with ensuring that the world's knowledge of food and agriculture is available to those who need it, when they need it, and in a form which they can access and use [41]. The human population will approach 9 billion by 2050, so it will be necessary to put in place policies aimed at ensuring a sufficient and fair distribution of resources: world food production will have to increase by 60%, by increasing agricultural production and fighting water scarcity. The online data portal to be launched by FAO will enhance planners' and decision makers' capacity to estimate agricultural production potentials and variability under different climate and resource scenarios.

[32] http://online.wsj.com/article/SB10001424052748704547604576263261679848814.html
[33] http://www.nytimes.com/2011/04/24/business/24unboxed.html?_r=1&src=tptw
[34] http://www.mobs-lab.org/uploads/6/7/8/7/6787877/american_idol_finale.pdf
[35] http://yi.com/home/EysenbachGunther/publications/2006/eysenbach2006c-infodemiologyamia-proc.pdf
[36] http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/papers/detecting-influenza-epidemics.pdf
[37] http://www.cdc.gov/ehrmeaningfuluse/Syndromic.html
[38] http://www.cs.jhu.edu/%7Empaul/files/2011.icwsm.twitter_health.pdf
[39] http://www.ncbi.nlm.nih.gov/pubmed/21573238
[40] http://healthmap.org/en/
[41] http://data.fao.org/ and http://www.grdi2020.eu/Repository/FileScaricati/050e1e8a-3e69-4ba0-86a5-b8f7c8322ebe.pdf
  
• Environmental analysis. At the last United Nations conference on climate (i.e. COP 17), which took place in 2011, the European Environment Agency, the geospatial software company Esri and Microsoft presented the network Eye on Earth [42], which can be used to create an online site and group of services for scientists, researchers and policy makers in order to share and analyze environmental and geospatial data. Three other projects launched by these institutions at COP 17 include WaterWatch (using EEA's water data); AirWatch (about EEA's air quality data); and finally NoiseWatch, which combines environmental data with user-generated information provided by citizens. Moreover, during the 2010 United Nations climate meeting (COP 16), Google launched its own satellite and mapping service Google Earth Engine [43], which is a combination of a computing platform, an open API and satellite imagery spanning 25 years. All these tools will be available to scientists, researchers and governmental agencies for analyzing environmental conditions in order to make sustainability decisions. In this way, the government of Mexico created a map of the country's forests incorporating 53,000 Landsat images, which can be used by the federal authority and NGOs to make decisions about land use and sustainable agriculture.
  
• Crisis management and anticipation. On the occasion of the Haiti earthquake [44], a European Commission Joint Research Centre team used the damage reports mapped on the Ushahidi-Haiti platform [45] to show that crowdsourced data can help predict the spatial distribution of structural damage in Port-au-Prince. Their model, based on 1645 crowdsourced SMS reports, almost perfectly predicts the structural damage of the most affected areas reported in the World Bank-UNOSAT-JRC damage assessment, which was performed by 600 experts from 23 countries in 66 days on the basis of high-resolution aerial imagery of structural damage. As for future developments, some research efforts [46] highlight the fact that Big Data can be used for crisis management and anticipation by building up crisis observatories, i.e. laboratories devoted to collecting and processing enormous volumes of data on both natural systems and human techno-socio-economic systems, so as to gain early warnings of impending events. With those capacities it would be possible to set up Crisis Observatories for financial and economic crises, for armed conflicts, for crime and corruption, for social crises, for health risks and disease spreading, and for environmental changes.
  
• Global Development. An inspiring example is given by Global Pulse [47], a Big Data-based innovation programme fostered by the UN Secretary-General and aimed at harnessing today's new world of digital data and real-time analytics in order to foster international development, protect the world's most vulnerable populations, and strengthen resilience to global shocks. The programme is rooted in three main pillars: research on new data indicators providing real-time understanding of communities' welfare as well as real-time feedback on policies; creation of a toolkit of free open-source software for mining real-time data useful for shared evidence-based decisions; and the establishment of country-level innovation centres (Pulse Lab) where real-time data are applied to development challenges. The programme encompasses 5 main projects carried out with several partners:
o "Daily Tracking of Commodity Prices: the e-Bread Index" [48], which investigates how scraping online prices could provide real-time insights on price dynamics
o "Unemployment through the Lens of Social Media" [49], which relates unemployment statistics with unemployment-related conversation from the open social web
o "Twitter and the Perception of Crisis Related Stress" [50], which investigates what indicators can help in understanding people's concerns on food, fuel, finance and housing
o "Monitoring Food Security Issues through New Media" [51], which finds emerging trends related to food security using text analysis, semantic clustering and network theory
o "Global Snapshot of Wellbeing – Mobile Survey" [52], aimed at experimenting with new tools able to replicate the standards of traditional household surveys in real time on a global scale

[42] http://www.eyeonearth.org/
[43] http://earthengine.google.org/#intro
[44] http://publications.jrc.ec.europa.eu/repository/handle/111111111/15684
[45] http://haiti.ushahidi.com/
[46] http://arxiv.org/pdf/1012.0178v5.pdf
[47] http://www.unglobalpulse.org/about-new
  
• Intelligence and security. As examples of governments' commitment to Big Data for national security we can cite the Cyber-Insider Threat (CINDER)53 program, which aims at developing new ways of detecting cyber espionage activities in military computer networks, as well as at increasing the accuracy, rate and speed with which cyber threats are detected. Another example is the Anomaly Detection at Multiple Scales (ADAMS)54 program led by the Defense Advanced Research Projects Agency (DARPA), which addresses the problem of anomaly detection and characterization in massive data sets. The program will initially be applied to insider-threat detection, in which individual actions are recognized as anomalous against a background of routine network activity (a minimal sketch of this idea follows this paragraph). Finally, the Center of Excellence on Visualization and Data Analytics (CVADA) of the Department of Homeland Security (DHS) is leading a research effort on data that can be used by first responders to tackle natural disasters and terrorist attacks, by law enforcement to address border security concerns, or to detect explosives and cyber threats.
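As an illustration of the kind of detection these programmes aim at, here is a minimal sketch, entirely ours and not drawn from CINDER or ADAMS, that flags events as anomalous with a simple z-score rule against a baseline of routine activity:

```python
# Minimal anomaly-detection sketch: flag activity as anomalous when it
# deviates strongly from a baseline of routine behaviour (z-score rule).
# Data and threshold are illustrative assumptions only.
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Return indices of observations more than `threshold` standard
    deviations away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, x in enumerate(observed)
            if abs(x - mu) > threshold * sigma]

# Routine network activity (e.g. files accessed per hour by one user) ...
baseline = [12, 15, 11, 14, 13, 16, 12, 14, 15, 13]
# ... versus a new observation window containing a suspicious spike.
observed = [14, 13, 250, 15]
print(flag_anomalies(baseline, observed))  # -> [2]
```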
  
An Interesting Application: Smart Cities

A Smart City is a public administration or authority delivering ICT-based services and infrastructure that are easy to use, efficient, responsive, open and environmentally sustainable. We can identify six main dimensions55:
• Smart economy, characterized by a high standard of living and competitive elements: innovation and entrepreneurship, high productivity, a flexible labour market, internationalism, and the ability to transform;
• Smart mobility, i.e. an efficient public transportation system, local and international accessibility, availability of ICT infrastructure, sustainability and safety;
• Smart environment (sustainability of natural resources): low pollution, protection of the environment, natural attractiveness;
• Smart people, given by a high level of human and intellectual capital, a high level of qualification, lifelong learning, social and ethnic diversity, flexibility and creativity;
• Smart living (high quality of life): presence of cultural facilities, healthy environmental conditions, individual safety, housing quality, education facilities, touristic attractiveness and social cohesion;
• Smart governance, given by citizens' participation in decision-making and by the presence of public and social services and of transparent and open governance.

48 http://www.unglobalpulse.org/projects/comparing-global-prices-local-products-real-time-e-pricing-bread
49 http://www.unglobalpulse.org/projects/can-social-media-mining-add-depth-unemployment-statistics
50 http://www.unglobalpulse.org/projects/twitter-and-perceptions-crisis-related-stress
51 http://www.unglobalpulse.org/projects/news-awareness-and-emergent-information-monitoring-system-food-security
52 http://www.unglobalpulse.org/projects/global-snapshot-wellbeing-mobile-survey
53 http://www.darpa.mil/Our_Work/I2O/Programs/Cyber-Insider_Threat_%28CINDER%29.aspx
54 http://www.darpa.mil/Our_Work/I2O/Programs/Anomaly_Detection_at_Multiple_Scales_%28ADAMS%29.aspx
55 See also the project EuropeanSmartCities at http://www.smart-cities.eu/model.html
  
The combination of all the benefits stemming from Big Data in governance makes it evident that the integration of heterogeneous data from various domains holds high potential to provide insights on cities. New technologies will unlock massive amounts of data about all aspects of the city as well as its citizens. For instance, new systems involving energy use at fixed locations (point sources, such as houses and offices) are being implemented by means of smart metering, as well as through the integration of the various information systems used to record pricing and activity. Another possibility is the extraction of positional and frequency data from social media such as Twitter, Facebook, Flickr and Foursquare. All these data will be used to fulfil the Smart Cities targets. Take for instance the transportation system, where diagnosing and anticipating abnormal events such as traffic congestion requires the integration of various data sources, such as traffic data, weather data, road conditions, or traffic light strategy (a minimal sketch of such an integration follows this paragraph). Another possibility is given by e-inclusion technologies and open data for governance. One important example of the development of the Smart City concept at large scale is the New York City project "Roadmap for a Digital Future"56, which outlines a path to build on New York City's successes and establish it as the world's top-ranked Digital City, based on indices of internet access, open government, citizen engagement, and digital industry growth.
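The following is a minimal sketch of that kind of data integration; the data values, column names and threshold are invented for illustration, and pandas is assumed as the analysis library:

```python
# Illustrative sketch (hypothetical data): integrating heterogeneous
# city data streams by timestamp, the kind of join that congestion
# diagnosis would build on.
import pandas as pd

traffic = pd.DataFrame({
    "time": pd.to_datetime(["2013-07-01 08:00", "2013-07-01 09:00"]),
    "avg_speed_kmh": [42, 17],          # from road sensors
})
weather = pd.DataFrame({
    "time": pd.to_datetime(["2013-07-01 08:00", "2013-07-01 09:00"]),
    "rain_mm": [0.0, 6.5],              # from weather stations
})

# Align the two sources on the shared timestamp so that a drop in
# average speed can be inspected together with the weather context.
combined = traffic.merge(weather, on="time")
print(combined[combined["avg_speed_kmh"] < 25])  # candidate congestion events
```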
  
	
  
Recent Trends

Big Data is a fast-growing phenomenon: as Google CEO Eric Schmidt pointed out in 2010, we now create as much information every two days as was created from the appearance of man up to 2003. Nowadays57 it is possible to store all the world's music on a $600 disk drive, while the content shared on Facebook every month amounts to 30 billion pieces. According to forecasts, global data will grow at a 40% annual rate, while total IT spending will grow by just 5%. In 2010 users and companies stored more than 13 exabytes of new data, over 50,000 times the data held in the Library of Congress.
  	
  
Big Data is also a potential booster for the economy, bearing a $300 billion potential annual value to US health care, a $600 billion potential annual consumer surplus from using personal location data globally, and a €250 billion potential annual value to European public administration. In fact, the European Commission is expected to adopt an Open Data Strategy, i.e. a set of measures aimed at increasing government transparency and creating a €32 billion a year market for public data. Finally, as reported last year by the McKinsey Global Institute58, the United States will need 140,000 to 190,000 more workers with deep analytical expertise and 1.5 million more data-literate managers. Again according to the McKinsey Global Institute, the potential value of global personal location data is estimated at $700 billion to end users, and it can result in up to a 50% decrease in product development and assembly costs. What is the growth engine of Big Data?

56 http://www.nyc.gov/html/mome/digital/html/roadmap/theroadmap.shtml
57 See McKinsey Global Institute (2011), "Big data: The next frontier for innovation, competition, and productivity"
58 http://www.mckinsey.com/Features/Big_Data
  
On one side, more "old world" data is produced through "open governance" and digitization. On the other side, "new world" data are created and continuously collected in domains such as "in silico" medicine, "in silico" engineering and Internet science. Brand new fields of science are being created: computational chemistry, biology, economics, engineering, mechanics, neuroscience, geophysics, and so on. The same is true in the humanities, with the birth of computational social science based on mobile-phone and social-network digital traces. A wide array of actors, including humanities and social science academics, marketers, governmental organizations, educational institutions and motivated individuals, is now engaged in producing, sharing, interacting with, and organizing data.
All these developments are enabled by the rise of new technologies for data collection: web logs; RFID; sensor networks; social networks; social data (due to the social data revolution); Internet text and documents; Internet search indexing; call detail records; astronomy, atmospheric science, genomics, biogeochemical and biological data; military surveillance; medical records; photography archives; video archives; large-scale eCommerce.
  
	
  
Inspiring cases
• The Ion Proton™ Sequencer59 is a rapid genome-scale benchtop sequencer. The tool allows data analysis to be performed within the same day on a single stand-alone server.
• The NIH Human Connectome Project60 aims at mapping the neural pathways that underlie human brain function, in order to acquire and share data about the structural and functional connectivity of the human brain.
• The Models of Infectious Disease Agent Study61 is a collaboration of research and informatics groups to develop computational models of the interactions between infectious agents and their hosts, disease spread, prediction systems and response strategies.
• MyTransport.sg62 is a portal, developed by the Land Transport Authority (LTA) of Singapore, providing information and eServices for all land transport users.
• UN Global Pulse63, an innovation initiative launched by the United Nations Secretary-General, aimed at exploring how digital data sources and real-time analytics technologies can help policymakers to better protect populations from shocks.

59 http://www.lifetechnologies.com/global/en/home/about-us/news-gallery/press-releases/2012/life-techologies-itroduces-the-bechtop-io-proto.html.html
60 http://neuroscienceblueprint.nih.gov/connectome/index.htm
61 http://www.nigms.nih.gov/Research/FeaturedPrograms/MIDAS/
62 http://www.mytransport.sg/content/mytransport/home.html
63 http://www.unglobalpulse.org/about-new
  
Tools on the market

Freely available tools

There are not many freely available tools for Big Data analysis on the market. Yet the presence of freely available tools brings many benefits:
• Developers and analysts can use them to experiment with emerging types of data structures and so develop new and different analytical procedures
• Developers and IT professionals contribute their findings and know-how back to the industry, driving knowledge exchange

Freely available tools make it possible to overcome data limitations, simplify the analytical process and visualize results. The functionalities provided by this software are:
• Massively parallel processing (MPP) database products for large-scale analytics and next-generation data warehousing
• Data-parallel implementations of statistical and machine learning methods
• Visual data mining and modelling

In this view, the free Big Data tools developed by Greenplum for data scientists and developers are very important: MADlib and Alpine In-Database Miner64, and Greenplum HD Community Edition65. Other partially free software with important Big Data applications includes KNIME66, Weka/Pentaho67, Rapid-I RapidAnalytics68 and Rapid-I RapidMiner69. Finally there is R70 which, although not built for Big Data, has interesting applications in this realm.

64 http://www.greenplum.com/community/downloads/analytics-tools/
65 http://www.greenplum.com/community/downloads/database-ce/
66 http://www.knime.org/
67 http://weka.pentaho.com/
68 http://rapid-i.com/content/view/182/196/
69 http://rapid-i.com/content/view/181/196/
70 http://www.r-project.org/
  
	
  
	
  
Enterprise-level software

Enterprise-level software is adopted for the following functionalities:
• Open source software based on Apache Hadoop
• Data storage platforms and other information infrastructure solutions
• Shared-nothing massively parallel processing (MPP) database architectures
• Dataflow engines and software interconnect technologies
• Data discovery and exploration tools
• Built-in text analytics, enterprise-grade security and administrative tools
• Real-time analytic processing (RTAP) platforms
• Software-as-a-service (SaaS)
• Visualization features supporting exploratory and discovery analytics
• On-line analytical processing (OLAP)
• BI/DW (business intelligence and data warehousing)
• EDW (enterprise data warehousing)

Examples of such software include: Tableau BI platform71; SAS Data Integration Studio72; SAS High Performance Analytics73; SAS On Demand74; SAND Analytic Platform75; SAP BEx76; SAP NetWeaver77; SAP In-Memory Appliance (SAP HANA)78; ParAccel Analytic Database (PADB)79; IBM Netezza80; IBM InfoSphere BigInsights81; IBM InfoSphere Streams82; Kognitio WX283; Kognitio Pablo84; EMC Greenplum Database85; Greenplum HD86; EMC Greenplum Data Computing Appliance87; Greenplum Chorus88; Cloudera Enterprise89; StatSoft Statistica90.
Some other software which has not been built specifically for Big Data applications, but can nonetheless be used for Big Data analytics, includes Mathematica91, MatLab92 and Stata93.

71 http://www.tableausoftware.com/products/server
72 http://support.sas.com/documentation/onlinedoc/etls/
73 http://www.sas.com/software/high-performance-analytics/in-memory-analytics/analytics.html
74 http://www.sas.com/solutions/ondemand/
75 http://www.sand.com/analytics/architecture/
76 http://scn.sap.com/community/business-explorer
77 http://www.sap.com/platform/netweaver/index.epx
78 http://www.sap.com/solutions/technology/in-memory-computing-platform/hana/overview/index.epx
79 http://www.paraccel.com/
80 http://www-01.ibm.com/software/data/netezza/
81 http://www-01.ibm.com/software/data/infosphere/biginsights/
82 http://www-01.ibm.com/software/data/infosphere/streams/
83 http://www.kognitio.com/analyticalplatform
84 http://www.kognitio.com/pablo
85 http://www.greenplum.com/products/greenplum-database
86 http://www.greenplum.com/products/greenplum-hd
87 http://www.greenplum.com/products/greenplum-dca
88 http://www.greenplum.com/products/chorus
89 http://www.cloudera.com/products-services/enterprise/
90 http://www.statsoft.com/
91 http://www.wolfram.com/mathematica/
92 http://www.mathworks.com.au/products/matlab/
93 http://www.stata.com/
  
	
  
Key challenges and gaps

In order to enjoy the full potential stemming from Big Data, it would be necessary to remove the technological barriers preventing the exchange of data, information and knowledge between disciplines, as well as to integrate activities which are based on different ontological foundations. Even though Big Data has provided many benefits, many challenges still have to be coped with. For instance, Gartner (2011)94 argues that the challenges are given not only by the volume of data, but also by its variety (heterogeneity of data types and representations, semantic interpretation) and velocity (rate of data arrival and action timing). According to recent research, the required advancements include95:
• Data modelling challenges: data models coherent with data representation needs; data models able to describe discipline-specific aspects; data models for the representation and querying of data provenance and contextual information; data models and query languages for representing and managing data uncertainty, and for representing and querying data quality information
• Data management challenges: providing quality, cost-effective, reliable preservation of and access to the data; protecting property rights, privacy and security of sensitive data; ensuring data search and discovery across a wide variety of sources; connecting data sets from different domains in order to create an open linked data space. Data can be unstructured or semi-structured with no context; data formats differ; different data labels are used for the same data elements; different data entry conventions and vocabularies are used; data entry errors occur; and data sets can be so large that they cannot be effectively processed by a single machine, requiring data parallelization and task parallelization96 (a minimal sketch of the data-parallel pattern follows this list)
• Data service/tools challenges: data tools for most scientific disciplines are inadequate to support research in all its phases, so that scientists are less productive than they might be. In fact there is a need for software able to "clean", analyse and visualize huge amounts of data. Moreover, data tools and policies for ensuring cross-disciplinary collaboration and fertilization among different disciplines and scientific realms are missing.

94 http://my.gartner.com/portal/server.pt?open=512&objID=202&mode=2&PageID=5553&resId=1727219; Gartner, 2011. Available at http://www.gartner.com/it/page.jsp?id=1731916
95 http://www.grdi2020.eu/Repository/FileScaricati/6bdc07fb-b21d-4b90-81d4-d909fdb96b87.pdf
96 Some Big Data challenges are deeply related to policy making, such as the fact that many agencies pay a high premium to both internal resources and external third parties to manage their data. Additionally, data management can sometimes be redundant if not properly set up. Moreover, regulations do not take into account the new, expanded capabilities that IT offers, as it takes time to issue a new law and bureaucrats are not so keen on novelty.
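To make the parallelization point concrete, the following is a minimal, hypothetical Python sketch of the data-parallel pattern; the data and the per-chunk function are invented for illustration, and real systems (MPP databases, Hadoop-style platforms) apply the same split/combine idea at far larger scale:

```python
# Minimal data-parallel sketch (illustrative): split a workload across
# worker processes, compute partial results, then combine them.
from multiprocessing import Pool

def count_tokens(chunk: str) -> int:
    """Per-chunk work: here, simply count whitespace-separated tokens."""
    return len(chunk.split())

if __name__ == "__main__":
    # Each chunk stands in for a partition of a data set too large
    # for one machine; partial results are combined at the end.
    chunks = ["open data for governance",
              "real-time policy feedback",
              "big data analytics"]
    with Pool(processes=3) as pool:
        print(sum(pool.map(count_tokens, chunks)))  # total token count
```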
  
	
  
As for other issues concerning Big Data, Boyd and Crawford (2011) highlight the following:
• Relationship between automatic search and the definition of knowledge. At the beginning of the 20th century Ford introduced mass production, automation and the assembly line, reshaping not only the way things are produced, but also the general understanding of labor, the human relationship to work, and society at large. Fordism consisted in breaking down holistic tasks into atomized and independent ones. In the same way, Big Data is a new system of knowledge, characterized by a computational turn in science leading to a change in the constitution of knowledge, the process of research and the categorization of reality. But just as Fordism had its limits (it has indeed been superseded by the Just-in-Time paradigm), the specialized Big Data tools are not flawless either. Big Data, as a new system of knowledge, can change the very meaning of learning itself, with all the possibilities and limitations embedded in systems of knowing.
	
  
• Big Data may produce misleading claims of objectivity and accuracy. In science there is a deep cleavage between qualitative and quantitative scientists. Apparently, qualitative scientists would be engaged in creating and interpreting stories, while quantitative scientists would be in the business of producing facts. Needless to say, that is not the case, as all objectivity claims come from subjects, who make subjective observations and choices. Moreover, data analysis is based on a large number of assumptions (see for instance asymptotic theory in statistics), and even though a model may be mathematically valid or an experiment scientifically valid, the final interpretation is subjective. Other examples are the difficulty of integrating different datasets in a consistent way, the arbitrary choices inherent in data cleaning, and the fact that internet databases may well be affected by biases such as frictions and self-selection. In this view, by increasing the space of quantification, especially in the social sciences, Big Data might support objectivity and accuracy claims which are not really grounded in good sense and reality.
  	
  
	
  
• A higher quantity of data does not always mean better data. In all sciences there is a massive amount of literature (on interpretation bias, design standardization, sampling mechanisms and question bias, statistical significance and diagnostics) aimed at ensuring the consistency of data collection and analysis. Curiously, Big Data scientists sometimes assume a priori the quality of their data and completely neglect the methodological issues proper to the established sciences. A clear example is given by social media data, which are subject to self-selection bias, as the people using social media are not representative of society itself. Even the definition of an active user and account of a social medium might not be innocuous: in fact it is estimated that 40% of Twitter's users are merely "listeners", i.e. do not proactively take part. Finally, it has to be recognized that in many contexts high quality research is purposely carried out with a limited amount of data, as for instance in experimental game theory analysis.
  
	
  
• Big Data and ethical issues. The use for research purposes of "public" data on social media websites opens the door to deontological issues. The problem is: can those data be used without any ethical or privacy consideration? Big Data is an emerging field of science, so ethical considerations are yet to be fully worked out. How can researchers be sure that their activity is not harmful to some of their subjects? On the one hand, it is impossible to ask for data use permission from all the subjects present in a database. On the other hand, the mere fact that the data are available does not justify their use. Accountability to the field of research and accountability to the research subjects are the ethical keys for Big Data. In all the traditional fields of science, researchers must follow a series of professional standards aimed at protecting the rights and well-being of human subjects. The ethical implications of Big Data research, by contrast, are not yet clear.
  
	
  
• Digital divides created by Big Data. It is widely assumed that doing research on Big Data automatically involves having quick and easy access to databases. This is not the case: only social media companies have access to large datasets, they sell those data at a high price, and they offer only small data sets to university-based researchers. So researchers with a considerable amount of funding, or based inside those firms, can have access to data that outsiders will not, and their methodologies and claims therefore cannot be verified. In this view Big Data can create a new digital divide between researchers belonging to the top universities and working with the top companies, and scholars at the periphery. But the digital divide can also be skills-based: only people with a strong computational background are able to wrangle with APIs and analyse massive quantities of data. In conclusion, there is a new digital divide between the Big Data rich, who are able to analyze and to buy datasets and who belong to top universities and companies, and the Big Data poor, who are outsiders.
  
	
  
Finally, according to the UN97, the Big Data challenges can be divided along two main dimensions.
Data management:
• Privacy. The development of new technologies always raises privacy concerns for individuals, companies and societies. This is a crucial issue, as privacy, safety and diversity are important for defending the freedom of citizens, and obviously companies have the right to retain their confidential information. In the era of Big Data, the primary producers, who are the citizens using the services and devices that generate data, are seldom aware that they are doing so, or of how their data will be used. Sometimes it is also unclear to what extent users of social media such as Twitter consent to the analysis of their data. The pool of individual information shared by mobile phone and credit card companies, social media and search engines is simply astonishing. People must be conscious of this, as privacy is a pillar of freedom.
  	
  	
  
	
  
• Access and sharing. A great amount of data is available online for the most disparate uses. On the other hand, much data is retained by companies concerned about their reputation and the need to protect their competitiveness, or which simply lack the right incentives to share. Moreover, a set of technical and regulatory arrangements has to be put in place in order to ensure the inter-comparability of data and the interoperability of systems.

97 http://unglobalpulse.org/
  	
  
	
  
Data analysis:
• Summarising the data. Sometimes the data might simply be false or fabricated, especially with user-generated text-based data (blogs, news, social media messages). In addition, data are sometimes derived from people's perceptions, as in calls to health hotlines and online searches for symptoms. Another case is related to opinion mining and sentiment analysis, in which the true significance of the statements can be misread, so that the human factor is always crucial in the analysis. A further problem is that data are sometimes generated from intentions expressed in blog posts, online searches, or mobile-phone systems for checking market prices, which are not a sure indicator of actual intentions and final decisions. So there is a huge problem in extracting facts from user-generated text, as it may be difficult to distinguish feelings from facts.
• Interpreting data. A very important concern is sample selection bias, arising from the fact that the people generating the data are not representative of the entire population; for instance, younger generations make heavier use of the internet and mobile devices (a toy illustration follows this list). The conclusions of the analysis are then valid only for the sample at hand and cannot be generalized. Sometimes dealing with huge amounts of data leads researchers to focus on finding patterns or correlations without concentrating on the underlying dynamics. It is one thing to find a correlation, another to detect a causal relationship, and it is even more difficult to identify the direction of a causal relationship without a founding theory. A final issue is linked with using data from different sources, which can magnify the existing flaws in each database.
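The sample selection problem can be made concrete with a toy example, entirely ours, with all numbers invented: an "online" sample that over-represents younger respondents yields a badly biased estimate of overall support for a policy.

```python
# Toy illustration of sample selection bias (invented numbers):
# the online sample over-represents the young, so its estimate of
# policy support diverges wildly from the population value.
population = (
    [{"age": 25, "support": 1}] * 300 +   # younger, in favour
    [{"age": 65, "support": 0}] * 700     # older, against
)

def share_in_favour(people):
    return sum(p["support"] for p in people) / len(people)

print(share_in_favour(population))            # true value: 0.30
# "Online" sample: only younger people respond.
online = [p for p in population if p["age"] < 40]
print(share_in_favour(online))                # biased estimate: 1.00
```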
  
	
  
Finally, we have the challenges identified by the community white paper drafted with the collaboration of a group of leading researchers across the United States98:
• Heterogeneity and incompleteness. Data must be structured in a homogeneous way prior to analysis, as algorithms, unlike humans, are not able to grasp nuance. Most computer systems work better if multiple items are stored in an identical size and structure. But efficient representation, access and analysis of semi-structured data are also necessary, because a less structured design is more useful for certain analyses and purposes. Even after cleaning and error correction in the database, some errors and incompleteness will remain, challenging the precision of the analysis.
• Human collaboration. Even though analytical instruments have made tremendous advances, there are still many realms in which the human factor is able to discover patterns that algorithms cannot. An example can be found in the use of CAPTCHAs, which can discern human users from computer programmes. In this view, a Big Data system must involve a human presence. Given the complexity of today's world, there is a need to harness human ingenuity from different domains through crowdsourcing. A Big Data system therefore requires technologies able to support this kind of collaboration, even in the case of conflicting statements and judgments.

98 http://imsc.usc.edu/research/bigdatawhitepaper.pdf
  	
  
	
  
Current Big Data Techniques

Big datasets can be analysed by means of several techniques coming from statistics and computer science. A list of the principal categories follows (a minimal sketch of one of them, simulation, is given after the list):
  
• Cluster analysis. A statistical technique consisting in splitting a heterogeneous group into smaller subsets of similar elements, whose characteristics of similarity are not known in advance. A typical example is identifying consumers with similar patterns of past purchases in order to tailor a given marketing strategy more accurately.
• Crowdsourcing. A technique for collecting data drawn from a large group or community in response to an open call through networked media such as the internet. This category is of crucial importance in our case, as it is an instance of mass collaboration using Web 2.0.
• Data mining. A combination of database management, statistics and machine learning methods useful for extracting patterns from large datasets. Examples include mining human resources data in order to assess employee characteristics, or consumer bundle analysis to model the behavior of customers.
• Machine learning. A subfield of computer science (within the scope of artificial intelligence) concerning the definition and implementation of algorithms that allow computers to evolve their behaviour based on empirical evidence. An example of a machine learning application is natural language processing.
• Natural language processing. A set of computer science and linguistic methods adopting algorithms to analyse natural human language. This field, which began as a branch of artificial intelligence, deals with the interaction between computers and human language.
• Neural networks. Computational models which are structured, and work, similarly to the biological neural networks existing among brain cells, and which are used in particular to find non-linear patterns in the data. Applications include game playing and decision making (backgammon, chess, poker) and knowledge discovery in databases.
• Network analysis. A part of graph theory and network science which describes the relationships among discrete nodes in a graph or network. In particular, social network analysis studies the structure of relationships among social entities. Applications include the role of trust in exchange relationships and the study of recruitment into political movements and social organizations.
• Predictive modelling. The use of a mathematical model to best predict the probability of an outcome. This technique is widely used in customer relationship management to produce customer-level models that assess the probability that a customer will take a particular action, such as cross-sell, product deep-sell and churn.
• Regression. A statistical method for assessing how the value of a dependent variable changes with one or more independent variables. Examples of applications include the change in consumers' behaviour due to manufacturing parameters or economic fundamentals.
• Sentiment analysis. Natural language processing methods for extracting information such as the polarity, degree and strength of the sentiment over a given feature or aspect of a product. Many companies assess how different customers and stakeholders react to their products and actions by applying this analysis to blogs, social networks and other social media.
• Spatial analysis. Methods for assessing the geographical, geometric or topological characteristics of a data set. Spatial data are often drawn from geographical information systems (GIS), including addresses or latitude/longitude coordinates, to be incorporated into spatial regressions (e.g. correlation between commodity price and location) or simulations.
• Simulation. Modelling the behavior of a complex system to perform forecasting and scenario analysis. As an example we can mention Monte Carlo simulations, a class of computational algorithms that rely on repeated random sampling to compute their results.
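As an illustration of the last category, here is a minimal Monte Carlo sketch in Python; the example (estimating pi) is ours, not the report's, and is chosen only because it shows repeated random sampling in a few lines:

```python
# Minimal Monte Carlo sketch: estimate pi by repeated random sampling,
# the principle behind simulation-based forecasting and scenario analysis.
import random

def estimate_pi(samples=100_000):
    """Draw random points in the unit square and count the share that
    falls inside the quarter circle of radius 1; that share tends to pi/4."""
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())  # ~3.14 with enough samples
```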
  	
  
Current and Future Research
• Technologies for collecting, cleaning, storing and managing data: data warehouses; pivotal transformation; ETL; I/O; efficient archiving, storing, indexing, retrieval and recovery; streaming, filtering, compressed sensing, sufficient statistics; automatic data annotation; large database management systems; storage architectures; data validity, integrity, consistency and uncertainty management; languages, tools, methodologies and programming environments
• Technologies for summarizing data and extracting meaning: reports; dashboards; statistical analysis and inference; Bayesian techniques; information extraction from unstructured, multimodal data; scalable and interactive data visualization; extraction and integration of knowledge from massive, complex, multi-modal or dynamic data; data mining; scalable machine learning; data-driven high-fidelity simulations; predictive modelling, hypothesis generation and automated discovery
• Technologies for using data as a decision tool: decision trees, pro-con analysis, rule-based systems, neural networks, trade-off-based decisions (incorporating reporting, statistics and knowledge-based systems)
  
	
  
3.2.2. Opinion Mining and Sentiment Analysis

Summary overview

Current free tools:
- Filtering opinions based on rating; assessing sentiment based on keywords; visual word counting
- Argument mapping

Top market tools:
- Machine learning + human analysis

Current research:
- Statistical + semantic analysis through a lexicon/corpus of words with known sentiment, for sentiment classification
- Identification of policy-opinionated material to be analysed
- Computer-generated reference corpora in the political/governance field
- Visual mapping of bipolar opinion
- Identification of highly rated experts

Short-term future research:
- Visual representation
- Audiovisual opinion mining
- Real-time opinion mining
- Machine learning algorithms
- Natural language interfaces
- SNA applied to opinion and expertise
- Bipolar assessment of opinions
- Multilingual reference corpora
- Recommendation algorithms

Long-term future research:
- Multilingual audiovisual opinion mining
- Usable, peer-to-peer opinion mining tools for citizens
- Non-bipolar assessment of opinion
- Automatic irony detection
  
	
  
	
  
Introduction and definition

The explosion of social media has created unprecedented opportunities for citizens to publicly voice their opinions, but it has also created serious bottlenecks when it comes to making sense of these opinions. At the same time, the urgency of gaining a real-time understanding of citizens' concerns has grown: because of the viral nature of social media (where attention is very unevenly distributed), some issues rapidly and unpredictably become important through word-of-mouth.
Policy-makers and citizens do not yet have an effective way to make sense of this mass conversation and to interact meaningfully with thousands of others.
As a result of this paradox, the public debate in social media is characterized by short-termism and self-referentiality. Many experts consider social media a missed opportunity for better policy debate.
At the same time, the sheer amount of raw data is also an opportunity to make better sense of opinions. The key asset that Google exploited to reach dominance in the search market is not a better algorithm, but the power of more data.
We are therefore at a crucial juncture where the challenge of information overload can become not a problem, but an opportunity for making sense of a thousand voices and identifying problems as soon as they arise.
  
	
  
Opinion	
   mining	
   can	
   be	
   defined	
   as	
   a	
   sub-­‐discipline	
   of	
   computational	
   linguistics	
   that	
   focuses	
   on	
  
extracting	
   people’s	
   opinion	
   from	
   the	
   web.	
   The	
   recent	
   expansion	
   of	
   the	
   web	
   encourages	
   users	
   to	
  
contribute	
  and	
  express	
  themselves	
  via	
  blogs,	
  videos,	
  social	
  networking	
  sites,	
  etc.	
  All	
  these	
  platforms	
  
provide	
  a	
  huge	
  amount	
  of	
  valuable	
  information	
  that	
  we	
  are	
  interested	
  to	
  analyse.	
  Given	
  a	
  piece	
  of	
  
text,	
  opinion-­‐mining	
  systems	
  analyse:	
  
• which part expresses an opinion;
• who wrote the opinion;
• what is being commented on.
  
Sentiment analysis, on the other hand, is about determining the subjectivity, polarity (positive or negative) and polarity strength (weakly positive, mildly positive, strongly positive, etc.) of a piece of text: in other words, what the opinion of the writer is.
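To make polarity and polarity strength concrete, the following minimal Python sketch scores a piece of text against a hand-made lexicon. The word lists and scores are purely illustrative assumptions; real systems use large, domain-tuned resources and machine learning rather than a dozen hard-coded words.

    # Minimal lexicon-based polarity scorer (illustrative sketch only).
    # The tiny LEXICON and NEGATIONS sets are hypothetical; production
    # systems use large resources such as SentiWordNet.
    import re

    LEXICON = {"good": 1.0, "great": 2.0, "excellent": 3.0,
               "bad": -1.0, "poor": -2.0, "terrible": -3.0}
    NEGATIONS = {"not", "no", "never"}

    def polarity(text: str) -> float:
        """Signed score: the sign gives polarity, the magnitude its strength."""
        score, negate = 0.0, False
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in NEGATIONS:
                negate = True               # flip the next opinion word
            elif token in LEXICON:
                score += -LEXICON[token] if negate else LEXICON[token]
                negate = False
        return score

    print(polarity("the proposal is not bad, in fact it is good"))  # prints 2.0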
  	
  
	
  
Opinion mining and sentiment analysis cover a wide range of applications.
  	
  
1. Argument mapping software helps organise policy statements logically, by making explicit the logical links between them. Under the research field of Online Deliberation, tools like Compendium, Debatepedia, Cohere and Debategraph have been developed to give a logical structure to a set of policy statements and to link arguments with the evidence that backs them up (other similar tools include Rationale, http://rationale.austhink.com/tour). A minimal data-structure sketch follows this list.
  	
  
2. Voting Advice Applications help voters understand which political party (or which other voters) holds positions closest to theirs. For instance, SmartVote.ch asks voters to declare their degree of agreement with a number of policy statements, then matches their positions with those of the political parties.
  
3. Automated content analysis helps process large amounts of qualitative data. There are today on the market many tools that combine statistical algorithms with semantics and ontologies, as well as machine learning with human supervision. These solutions are able to identify relevant comments and assign positive or negative connotations to them (the so-called sentiment).
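As a concrete illustration of the first application area, the sketch below models an argument map as a tiny directed graph in Python. The data model and the example statements are invented for illustration only; real tools such as Compendium or DebateGraph offer far richer semantics.

    # Hypothetical, minimal argument-map data model: statements joined by
    # typed relations ("supports", "objects-to"), making the logical links
    # between policy statements explicit.
    from dataclasses import dataclass, field

    @dataclass
    class Statement:
        text: str
        links: list = field(default_factory=list)  # (relation, Statement) pairs

        def link(self, relation: str, target: "Statement") -> None:
            self.links.append((relation, target))

    claim = Statement("Policy X should be adopted")
    evidence = Statement("Pilot study results favour policy X")
    objection = Statement("Policy X is costly to enforce")
    evidence.link("supports", claim)      # evidence backing the claim
    objection.link("objects-to", claim)   # counter-argument to the same claim

    for node in (evidence, objection):    # print the explicit logical links
        for relation, target in node.links:
            print(f"{node.text!r} --{relation}--> {target.text!r}")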
  
	
  
The first two points reflect mature application areas, while the third is emerging and raises relevant research issues; we will therefore focus mainly on this third area when discussing research issues.
  
	
  
Why it matters in governance

These applications are the basic infrastructure of large-scale collaborative policy-making. They help make sense of thousands of interventions, and they provide early warning of possible disruption by detecting feedback from citizens in a timely manner. Traditionally, ad hoc surveys are used to collect feedback in a structured manner. However, this kind of data collection is expensive, as it requires an investment in design and data collection; it is difficult, as people are not interested in answering surveys; and it is ultimately of limited value, as it detects "known problems" through pre-defined questions and interviewees but fails to detect the most important problems, the famous "unknown unknowns". Opinion mining helps identify problems by listening rather than by asking, thereby ensuring a more accurate reflection of reality.
  	
  
Argument mapping software is in turn useful to ensure that policy debates are logical and evidence-based, and do not repeat the same arguments again and again.
  
These tools would ultimately be helpful not only for policy-makers, but also for citizens, who could more easily understand the key points of a discussion and participate in the policy-making process.
  
	
  
Recent trends

Opinion mining is not in itself a new research theme. Automated methods for content analysis have been increasingly used, growing at least six-fold from 1980 to 2002 (Neuendorf, K. A. 2002. The Content Analysis Guidebook. Sage). The research theme is grounded in long-established computer science disciplines, such as Natural Language Processing, Text Mining, Machine Learning and Artificial Intelligence, Automated Content Analysis, and Voting Advice Applications. However, according to Pang and Lee (2008), since 2001 we have seen a growing awareness of the problems and opportunities, and "subsequently there have been literally hundreds of papers published on the subject."
  
What is new today is the sheer increase in the quantity of unstructured data, mainly due to the adoption of social media, that is available for machine learning algorithms to be trained on. Social media content by nature reflects opinions and sentiments, while traditional content analysis tended to focus on identifying topics (Pang, Lee, and Vaithyanathan 2002). As such, it deals with more complex natural language problems. Because of this combination of a larger volume of available data and more complex concepts to analyse, in recent years there has been a decrease in interest in semantic-based applications and a move towards greater use of statistics and visualisation. Like any other scientific discipline, automated content analysis is becoming a data-intensive science.
  
Inspiring cases

• Usage of DiscoverText in government (http://www.discovertext.com/Government.html)
• OpinionSpace (http://www.state.gov/opinionspace/)
• Project NOMAD (http://www.nomad-project.eu/)
Tools on the market

The market for opinion mining tools is crowded with solution providers. Most of these applications are geared towards analysing customers' feedback about products and services, and are therefore skewed towards sentiment analysis that detects positive/negative feelings by interpreting natural language.
  
Freely available tools

Most of the state-of-the-art argument mapping and voting advice applications are freely available, because they derive largely from the academic community or NGOs. A comprehensive list of such tools is available at http://groups.diigo.com/group/CROSSOVERproject/content/tag/argumentmapping and http://groups.diigo.com/group/CROSSOVERproject/content/tag/VAA

There are currently freely available applications that simply analyse terms based on a pre-defined glossary, and give highly simplified and unreliable results. One example is http://twitrratr.com/
  	
  
	
  
	
  	
Figure 11: Twitrratr
  	
  
	
  
Another stream of simple, free and popular solutions is word visualisation. Word clouds are increasingly used to make sense of large quantities of information at a glance. Obviously, such tools are also extremely simplified: they only offer a visualisation of the most commonly used terms, which is helpful for getting an idea of what a document is about, but little more. Tools such as wordle.com provide an appealing design solution that can serve as an entry point to the opinion mining market; they are therefore important for involving a much wider public in this kind of activity. A minimal sketch follows the figure below.
Figure 12: Wordclouds
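As a minimal illustration, the sketch below builds a word cloud from a handful of placeholder comments using the third-party Python package wordcloud (an assumption: install it with pip install wordcloud); any comparable library would serve equally well.

    # Word-cloud sketch: frequency counting and layout are handled by the
    # (assumed) third-party `wordcloud` package; the comments are invented.
    from wordcloud import WordCloud

    comments = " ".join([
        "transport budget schools transport housing",
        "schools transport cycling budget budget",
    ])

    wc = WordCloud(width=800, height=400, background_color="white")
    wc.generate(comments)               # builds the frequency-based layout
    wc.to_file("citizen_comments.png")  # write the image to disk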
  
	
  
Finally, another way of making sense of large amounts of information is to rely on human effort, through crowdsourcing and collective intelligence: people not only submit their opinions, but actually filter them by signalling the most important ones. Tools such as uservoice.com allow customers to submit feedback and to rank other people's ideas, thereby allowing the most popular ideas to emerge. These tools are available at very low cost, but research shows that they are effective in gathering feedback rather than in identifying good ideas, as voting tends to focus on the easiest and most popular issues.
  
	
  	
  
	
  	
Figure 13: UserVoice
Enterprise-level software

Besides these simple, free applications, there is a flourishing market of enterprise-level opinion mining software with much more advanced features. These tools are largely used by companies to monitor their reputation and the feedback about their products on social media. In the government context, opinion mining has long been in use as an intelligence tool, to detect hostile or negative communications (Abbasi 2007). More recently, politics has become a key application area, as politicians monitor public opinion on social media to understand public reactions to their positions.
  
Technically, these tools rely on machine learning to identify and classify relevant comments, through a combination of latent semantic analysis, support vector machines, "bag of words" representations and semantic orientation. This process requires significant human effort aided by machines: all the tools on the market rely on a combination of machine and human analysis, typically using machines to augment the human capacity to classify, code and label comments. Automated analysis is based on a combination of semantic and statistical analysis; recently, because of the sheer increase in the quantity of datasets available, statistical analysis is becoming more important.
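The sketch below illustrates this machine-plus-human workflow with scikit-learn: a few human-labelled comments train a bag-of-words support vector machine, which can then label further comments automatically. The training sentences and labels are invented placeholders; real deployments train on thousands of human-coded comments.

    # Bag-of-words SVM sentiment classifier (illustrative sketch).
    # Requires scikit-learn; the labelled examples are made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    labelled = [("the new bus lanes are a great improvement", "pos"),
                ("waiting times at the clinic are unacceptable", "neg"),
                ("finally some real investment in our schools", "pos"),
                ("the road works have made traffic much worse", "neg")]
    texts, labels = zip(*labelled)

    # Human-coded examples train the machine; unigrams and bigrams form
    # the "bag of words" features.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(texts, labels)

    print(model.predict(["traffic is much worse since the changes"]))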
  
Key challenges and gaps

Current solutions for opinion mining and sentiment analysis are rapidly evolving, typically by reducing the amount of human effort needed to classify comments. Among the challenges identified are:
  
- The detection of spam and fake reviews, mainly through the identification of duplicates, the comparison of qualitative with summary reviews, the detection of outliers, and the reputation of the reviewer (Liu 2008); a minimal duplicate-detection sketch follows this list
- The limits of collaborative filtering, which tends to identify the most popular concepts and to overlook the most innovative, out-of-the-box thinking
- The risk of a filter bubble (Pariser 2011), where automated content analysis combined with behavioural analysis leads to a very effective but ultimately distorting selection of relevant opinions and content, so that users are not made aware of content that differs from their expectations
- The asymmetry in the availability of opinion mining software, which can currently be afforded only by organisations and governments, not by citizens. In other words, governments today have means of monitoring public opinion that are not available to average citizens: while content production and publication have been democratized, content analysis has not
- The integration of opinion with behavioural and implicit data, in order to validate the data and support analysis beyond the opinions explicitly expressed
- The continuous need for better usability and user-friendliness of the tools, which are currently usable mainly by data analysts
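The duplicate-identification approach mentioned in the first challenge can be illustrated in a few lines of Python: reviews are broken into overlapping word shingles and compared with Jaccard similarity, so that near-copies score high. The example reviews, the shingle size and the threshold are arbitrary assumptions made for the sketch.

    # Near-duplicate detection via word-shingle Jaccard similarity,
    # one simple way to flag copy-pasted fake reviews.
    def shingles(text, k=3):
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    reviews = [  # invented examples; 0 and 1 are near-copies
        "great service would recommend to anyone in the city",
        "great service would recommend to anyone in town",
        "the office was closed despite the published opening hours",
    ]
    for i in range(len(reviews)):
        for j in range(i + 1, len(reviews)):
            sim = jaccard(shingles(reviews[i]), shingles(reviews[j]))
            if sim > 0.5:  # arbitrary threshold
                print(f"possible duplicate pair ({i}, {j}): similarity {sim:.2f}")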
  
Current research

Current research is focusing on:

• Improving the accuracy of algorithms for opinion detection
• Reducing the human effort needed to analyse content
• Semantic analysis through lexicons/corpora of words with known sentiment, for sentiment classification
• Identification of policy-opinionated material to be analysed
• Computer-generated reference corpora in the political/governance field
• Visual mapping of bipolar opinion
• Identification of highly rated experts
  
Future research: long term and short term issues

We can distinguish between short- and long-term research efforts. For the short term we have:

• Enhanced discoverability of content through Linked Data
• Visual representation
• Audio-visual opinion mining
• Real-time opinion mining
• Machine learning algorithms
• SNA applied to opinion and expertise
• Bipolar assessment of opinions
• Multilingual reference corpora
• Comment and opinion recommendation algorithms
• Cross-platform opinion mining
• Collaborative sharing of annotating/labelling resources
  	
  
	
  
On the other hand, for the long term:

• Autonomous machine learning and artificial intelligence
• Usable, peer-to-peer opinion mining tools for citizens
• Non-bipolar assessment of opinion
• Automatic irony detection
3.2.3. Visual Analytics for collaborative governance: the opportunities and the research challenges
  
Summary Overview

Market availability:
- Information visualisation requirements for business intelligence and situational awareness
- Enterprise knowledge visualisation linking
- Online analytical processing and data mining
- Advanced social network analysis and visualisation
- Data mining and interactive visualisation
- Communication of location-based statistical data
- Information visualisation tools for high-dimensional non-linear data
- Visual analysis of data in spreadsheet format
- Demographics visualisations, allowing stakeholders and decision makers to have a clear picture of the data and of their trends over time
- Legal arguments visualisation: text analysis, argumentation mappings and visualisation algorithms
- Discussion arguments visualisation, making use of visualisation techniques for visualizing a discussion's flow
- Geographic visualisation tools
- Financial markets monitoring and visualisation in real time
- Advanced applications for security and defense

Challenges and gaps:
- Closing the loop of information selection, preparation and visualisation
- Simultaneous multiple visualisation
- Integration of visualisation with comments / wikis / blogs
- Collaborative platform display
- Interaction between visualisation and models

Current research:
- Mobile visual analytics tools
- Geo-visualisation of government data
- Integration with opinion mining and participatory sensing
- Evaluation framework for visualisation effectiveness
- Visualisation infrastructures for policy modelling issues

Short term future research:
- Re-usable, mashable tools for visual analytics
- Tighter integration between automatic computation and interactive visualisation
- Bias identification and signalling in visualisation
- Perceptual, cognitive and graphical principles
- Efficiency of visualisation techniques to enable interactive exploration, and interaction techniques such as focus & context

Long term future research:
- Impact evaluation of visual analytics on policy choices
- Learning adaptive algorithms for user intent
- Advanced visual analytics interfaces
- Intuitive, affordable visual analytics interfaces for citizens
- Development of novel interaction algorithms incorporating machine recognition of the actual user intent and appropriate adaptation of the main display parameters (level of detail, data selection, etc.) by which the data is presented
Introduction and definition

The explosion in computing techniques has led to the generation of a tremendous amount of data, which is stored on the internet and processed in IT infrastructures all over the world. Some examples of new technologies for data collection are: web logs; RFID; sensor networks; social networks; social data (due to the social data revolution); internet text and documents; internet search indexing; call detail records; astronomy, atmospheric science, genomics, biogeochemical and biological data; military surveillance; medical records; photography archives; video archives; and large-scale eCommerce.

In managing this huge amount of data, when it comes to human-computer interaction there is a need to distil the most important information and to present it in a humanly understandable and comprehensible way. This is where visualisation comes in: a way to interpret and translate data from computer-understandable formats into human ones, by employing graphical models, charts, graphs and other images that are conventional for humans (Bederson and Shneiderman 2003). On the one hand, we can define visualisation as any technique for creating insight, preferably by allowing users to interact with and alter the visualisation in order to iteratively answer questions and form new questions based on previous findings. On the other hand, visualisation can be defined as a set of techniques for communicating knowledge that can be supported by data.
  
In contrast with visualisation, traditionally seen as the output of the analytical process, visual analytics (http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1573625) considers visualisation a dynamic tool that aims at integrating the outstanding capabilities of humans in visual information exploration with the enormous processing power of computers, to form a powerful knowledge discovery environment. In this view, visual analytics is useful for tackling the increasing amount of data available and for making the best use of the information contained in the data itself. Moreover, visual analytics aims at presenting the data in a way suitable for informing the policy-making process.
  
More particularly, the interdisciplinary field of visual analytics aims at combining human perception and computing power in order to solve the information overload problem. In Thomas and Cook's (2005) definition, visual analytics is "the science of analytical reasoning supported by interactive visual interfaces". More precisely, visual analytics is an iterative process that involves information gathering, data preprocessing, knowledge representation, interaction and decision making. The characteristic of this field is that it combines data-mining and text-mining technologies, used for preprocessing massive amounts of data, with information visualisation, which is useful for disentangling important from trivial and useless information. (Information visualisation can be defined as a way of making data easier to understand through direct sensory experience rather than linguistic or logical reasoning: in Friendly's words, the study of "the visual representation of large-scale collections of non-numerical information, such as files and lines of code in software systems, library and bibliographic databases, networks of relations on the internet, and so forth"; see Friendly 2008.) In a certain way, information visualisation becomes a tool in a semi-automated analytical process characterized by cooperation between humans and computers, in which the user decides the direction of the analysis for a particular task, while the system works as an interaction tool. It is somewhat difficult to distinguish between information visualisation and visual analytics. Roughly speaking, information visualisation handles abstract data structures such as trees or graphs, whereas visual analytics deals properly with sense-making and reasoning. More particularly, information visualisation is mostly applied to data not belonging to scientific inquiry, e.g. graphical representations of data for business, government, news and social media; visualisation work does not necessarily deal with an analysis task, nor does it always use advanced data analysis algorithms. Visual analytics, on the other hand, can be seen as an integral approach to decision-making, combining visualisation, human factors and data analysis. It entails identifying the best automated analysis algorithms for a given analysis task and integrating them with appropriate visualisation and interaction techniques.
  
Visualisation and visual analytics should be considered in strict integration with other research areas, such as modelling and simulation (the connection between simulation and visualisation is even clearer in user interfaces, which enable the visualisation to take user commands), social network analysis, participatory sensing, open linked data, and visual computing.
  
The disciplines in the domain of visualisation and visual analytics include: Human-Computer Interaction (HCI), Computer Science, Graphic and Information Design, Usability Engineering, Cognitive and Perceptual Science, Decision Science, Information Visualisation, Scientific Visualisation, Databases, Data Mining, Statistics, Knowledge Discovery, Data Management & Knowledge Representation, Presentation, Production and Dissemination, Geospatial Analytics, Graphics and Rendering, Cognition, Perception, and Interaction.
  
As far as visual analytics methodologies are concerned, in the CROSSOVER taxonomy we can identify the following: visualisation of a single, static, embedded data set; visualisation of multiple static data sets; visualisation of a single live data feed or updating data set; and, finally, visualisation of multiple data points, including live feeds or updates.
  
Why it matters in governance

Today's governments face the challenge of understanding an increasingly complex and interdependent world, and the fast pace of change and increased instability in all areas of regulation require rapid decision-making able to draw on the widest amount of available evidence in real time. How can visualisation and visual analytics help?
  
• Generate high involvement of citizens in policy-making. One of the main applications of visualisation is in making sense of large datasets and identifying key variables and causal relationships in a non-technical way. Similarly, it enables non-technical users to make sense of data and interact with them. For instance, the GapMinder software (http://www.gapminder.org/) helps people understand the main global demographic changes and raises awareness of the implications of sound health policies in developing countries (a minimal bubble-chart sketch in the GapMinder style follows this list).
  	
  
• Understand the impact of policies: visualisation is instrumental in making the evaluation of policy impact more effective. For instance, Farmsubsidy (http://farmsubsidy.org/) helps identify the main beneficiaries of the Common Agricultural Policy by geo-referencing each single beneficiary.
  
• Identify problems at an early stage, detect the "unknown unknowns" and anticipate crises: visual analytics is largely used in the intelligence community because it helps exploit the human capacity to detect unexpected patterns and connections between data, thereby supporting the detection of potential threats at an early stage. For instance, the VisAware project (http://www.sci.utah.edu/publications/yarden05/VisAware.pdf) in the US provides situational awareness in emergencies, helping to coordinate the different resources involved.
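As a concrete taste of the GapMinder-style view mentioned above, the following matplotlib sketch plots income against life expectancy, with bubble sizes proportional to population. All four data points are invented placeholders, not real statistics.

    # Gapminder-style bubble chart sketch (matplotlib); data is made up.
    import matplotlib.pyplot as plt

    gdp_per_capita = [1000, 5000, 20000, 45000]   # USD, placeholder values
    life_expectancy = [55, 65, 75, 81]            # years, placeholder values
    population_m = [30, 90, 60, 10]               # millions, drives bubble size
    labels = ["A", "B", "C", "D"]                 # hypothetical countries

    fig, ax = plt.subplots()
    ax.scatter(gdp_per_capita, life_expectancy,
               s=[p * 10 for p in population_m], alpha=0.5)
    for x, y, name in zip(gdp_per_capita, life_expectancy, labels):
        ax.annotate(name, (x, y))
    ax.set_xscale("log")                          # incomes span orders of magnitude
    ax.set_xlabel("GDP per capita (USD, log scale)")
    ax.set_ylabel("Life expectancy (years)")
    plt.savefig("bubble_chart.png")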
  
	
  
History and trends

Since the beginning of human history, visualisation has been an effective way to communicate both abstract and concrete ideas. The appearance of digital visualisation led to the development of graphics hardware, as well as to a wide array of techniques used to visualize data in a number of ways (van Wijk, 2005).

One of the best-known examples of visualisation dates back to the 19th century, with the drawings by Charles Joseph Minard (http://www.math.yorku.ca/SCS/Gallery/minbib/index.htm; for other examples see http://www.infovis.net/printMag.php?num=110&lang=2), who developed a format to show data tied to a timescale against a landscape background. In particular, Minard conveyed a complex series of events through various data measures, explained together with their causes and consequences in a single graphic. Minard's drawings show the march of Napoleon's army towards Moscow, starting with 422,000 and ending with 10,000 men, and Hannibal's crossing of the Alps, starting with 97,000 and ending with 6,000 men. The modern visualisation field, making use of computer graphics, originated in the late 1980s with studies on scientific visualisation applied to fluid dynamics, volume visualisation, molecular modelling, imaging of remote-sensing data, and medical imaging (Rosenblum 1994).
From scientific visualisation emerged some more recent areas, such as information visualisation, mobile visualisation, location-aware computing and visual analytics. Information visualisation arose when Robertson, Card and Mackinlay, in the 1980s, started to use the work of Bertin (1967) and Tufte (1983) in interactive computer applications. Later, Shneiderman (1996) and others formalized the process of information visualisation. Finally, Ware (2004) emphasized the importance of human perception in information visualisation. In parallel with information visualisation arose the field of data mining, aimed at discovering information hidden in massive amounts of data. A characteristic of that field is that it aimed at substituting automatic computer operations for human analysis, rather than supporting human perception with interactive visualisation. To avoid this, the interdisciplinary field of visual analytics was developed, combining human perception abilities with computers' processing power in order to tackle massive amounts of information. Visual analytics can therefore be seen as the combination of human factors and data analysis on one side, and information visualisation on the other (Keim et al. 2008). Future developments of visual analytics include enhanced collaboration capabilities, more intuitive interaction, support of non-computing devices, and the integration of quantitative and qualitative data. In fact, visual analytics requires particular technological advances, as traditional data mining tools are unsuitable for some necessary functionalities, such as the algorithm speed required for iterative visualisation.
  
Inspiring cases in information visualisation and visual analytics:

• GapMinder (http://www.gapminder.org/)
• US Labour Force visualisation (http://flare.prefuse.org/launch/apps/job_voyager)
• State Cancer Profiles (http://statecancerprofiles.cancer.gov/micromaps/)
• Instant Atlas (http://www.instantatlas.com/CDC_story.xhtml)
• Rennes Metropole (http://dataviz.rennesmetropole.fr/quisommesnous/en/)
• City Dashboard (http://citydashboard.org/choose.php)
• OECD Better Life Index (http://www.oecdbetterlifeindex.org/)
• Gain Index (http://index.gain.org/)
• IBM Many Bills (http://manybills.researchlabs.ibm.com/)
• Graphical Contingency Analysis (http://availabletechnologies.pnnl.gov/technology.asp?id=288)
• DeepCity3D (http://www.deepcity3d.eu/default.aspx)
• Vis Sense (http://www.vis-sense.eu/)
	
  
	
  
	
  
	
  
	
  
Projects in information visualization and visual analytics:

• Jigsaw (http://www.cc.gatech.edu/gvu/ii/jigsaw/): visualization for investigative analysis
• Ploceus (http://www.cc.gatech.edu/gvu/ii/ploceus/): network-based visualization of tabular data
• Dotlink360 (http://www.cc.gatech.edu/gvu/ii/dotlink/): visual analytics for exploring converging business ecosystems
• SportsVis (http://www.cc.gatech.edu/gvu/ii/sportvis/): visualization to analyze sports data
• Intelligence Analysis (http://www.cc.gatech.edu/gvu/ii/intell/): visual analytics to help intelligence analysts
• SellTrend (http://www.cc.gatech.edu/gvu/ii/selltrend/): visualizing temporal, categorical event transactions
• Dust & Magnet (http://www.cc.gatech.edu/gvu/ii/dnm/): InfoVis via a magnet metaphor
• Fund Explorer (http://www.cc.gatech.edu/gvu/ii/fundexplorer/): stock portfolio diversification through Context Treemaps
• InfoCanvas (http://www.cc.gatech.edu/gvu/ii/infoart/): peripheral information art
• Information Mural (http://www.cc.gatech.edu/gvu/ii/mural/): squeezing large data sets into small views
• NetVizor (http://www.cc.gatech.edu/gvu/ii/netviz/): visualizing network topologies
• SunBurst (http://www.cc.gatech.edu/gvu/ii/sunburst/): radial space-filling views of hierarchies
• Tarantula (http://pleuma.cc.gatech.edu/aristotle/Tools/tarantula/): testing and debugging large software systems
  
	
  
Policy applications of visualisation and visual analytics tools

With regard to the governance and policy-making context, some visualisation tools are applicable to a wide array of issues and situations (education, environment, public health, urban growth, national defense, etc.). In the public context, visual analytics of public data is an exploding field, particularly in relation to the open data movement, used to monitor the policy context and evaluate government policies. Most basic mash-up tools are available to visualize government data.

Some examples:
  
• Demographics visualisations, allowing stakeholders and decision makers to have a clear picture of the data and of their trends over time. Visualisation of demographic data makes the design and evaluation of various policies easier, as there is no need to dig through acres of numbers; advanced algorithms are able to create figures and illustrations that are easy to interpret. Typical examples are the aforementioned GapMinder (which embeds visualisations of various demographic data at global level), as well as Dynamic Choropleth Maps (http://www.turboperl.com/dcmaps.html), DataPlace (http://www.knowledgeplex.org/dataplace.html), Hive Group (http://www.hivegroup.com/gallery/worldpop/), Name Voyager (http://www.babynamewizard.com/voyager#) and State Cancer Profiles (http://statecancerprofiles.cancer.gov/micromaps/).
• Legal arguments visualisation: text analysis, argumentation mappings and visualisation algorithms can be applied to legal documents in order to simplify legislation, making it more accessible and comprehensible to the general public (Many Bills, http://researcher.watson.ibm.com/researcher/view_project.php?id=1232; Clear Congress Project, http://clearcongressproject.com/), or in order to visually represent corroborative evidence (e.g. the tools Carneades, http://carneades.berlios.de/downloads/, and Deflog, http://www.ai.rug.nl/~verheij/aaa/).
  
• Discussion arguments visualisation, making use of visualisation techniques for visualizing the flow of a discussion that includes various arguments, in order to get instant awareness of the topics discussed, as well as of the arguments and the support such arguments gain. In this view, visualisation supports all interested stakeholders in understanding the flow of a discussion, which is presented to them in a structured and interactive format, avoiding numerous discussion threads. Examples of such visualisation tools include DebateGraph (http://www.debategraph.org), which is intensively used for building argumentation maps, as well as Araucaria (http://araucaria.computing.dundee.ac.uk/), Compendium (http://compendium.open.ac.uk/institute/), Argublogging (http://www.arg.dundee.ac.uk/?p=624) and Rationale (http://rationale.austhink.com/).
  
• Geovisualisation, which is based on the provision of theory, tools and methods for the visual analysis, synthesis, exploration and representation of geographical data and information, in order to derive problem-specific models and design task-specific maps that incorporate geographical knowledge into planning and decision making. Some examples of such tools include ESTAT (http://www.geovista.psu.edu/ESTAT/), the GeoViz Toolkit (http://www.geovista.psu.edu/geoviztoolkit/index.html), the geovisualisation tools at the US National Cancer Institute (http://gis.cancer.gov/nci/geovisualisation.html), and some applications of InstantAtlas (http://www.instantatlas.com/clients.xhtml#government). A minimal web-map sketch follows this list.
  
• Advanced visualisation applications used for security and national defense. In these fields, software advances are being led both on the military and on the corporate front: business organizations also have urgent information visualisation requirements supporting their business intelligence and situational awareness capabilities, as well as data mining and reporting requirements. Many software innovations are therefore targeted at financial and corporate requirements, but are also applicable to the defense domain, due to common data mining and information visualisation challenges. Examples of such tools are DataMontage (http://www.stottlerhenke.com/datamontage/examples/madcap/Air_force_wargame_simulation.htm), HoneyComb (http://www.hivegroup.com/solutions/demos/merit.html), Oculus GeoTime (http://www.oculusinfo.com/papers/GeoTime_Brochure_06.pdf) and Starlight (http://starlight.pnl.gov/). Other very interesting examples are Analyst's Notebook (http://www.i2group.com/us/products/analysis-product-line/ibm-i2-analysts-notebook) and Sentinel Visualizer (http://www.fmsasg.com/LinkAnalysis/Government/Solutions.asp), adopted by intelligence agencies such as the CIA.
  
• Visualisation applications adopted for monitoring and visualizing financial markets in real time. An example of such a tool is SmartMoney (http://www.smartmoney.com/map-of-the-market/).
  
• Visualisation applied to the monitoring of governmental finances/expenditure, such as USAspending.gov (http://usaspending.gov/), OffenerHaushalt (http://bund.offenerhaushalt.de/) and Where Does My Money Go (http://www.wheredoesmymoneygo.org/).
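The geovisualisation item above can be illustrated with the third-party Python package folium (an assumption: install with pip install folium), which shows the basic pattern such tools build on: plotting geo-referenced records on an interactive web map. The station locations and readings are invented placeholders.

    # Web-map sketch with folium; coordinates and readings are made up.
    import folium

    m = folium.Map(location=[48.8566, 2.3522], zoom_start=11)  # Paris centre
    stations = [((48.86, 2.35), "Station 1: PM10 = 21"),
                ((48.84, 2.30), "Station 2: PM10 = 35")]
    for (lat, lon), label in stations:       # one marker per record
        folium.Marker([lat, lon], popup=label).add_to(m)
    m.save("stations.html")                  # open the file in any browser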
  
Tools on the market

A massive quantity of visualisation tools, both freely available and enterprise-level, and critical for analysts and researchers but also useful to lay people, is now available online.
  
Freely available tools

First of all, we have visualisation websites useful for sharing and presenting data, providing clear context on important cultural, environmental, social and economic issues, and building and sharing charts, visualisations and discoveries; one example is Data360 (http://www.data360.org/index.aspx). Moreover, there are "do it yourself" infographic tools such as Vizify (https://www.vizify.com/), Visual.ly (http://visual.ly/), Easel.ly (http://www.easel.ly/) and Vizualize.me (http://vizualize.me/).
  	
  
Then we have data visualisation tools used for plotting data on maps; frameworks for creating charts, graphs and diagrams; tools that simplify the handling of data by transforming it into spreadsheets; visual data mining and database exploration systems; data visualisation systems for high-dimensional data; and visualisation frameworks for animating data. Some examples of those tools are: Data Wrangler (http://vis.stanford.edu/wrangler/), JavaScript InfoVis Toolkit (http://philogb.github.com/jit/), VisDB (http://bib.dbvis.de/uploadedFiles/202.pdf), Graphviz (http://www.graphviz.org/), IBM OpenDX (http://www.opendx.org/), Gephi (https://gephi.org/), GeoCommons (http://geocommons.com/), Miso Dataset (http://misoproject.com/dataset/), Polymaps (http://polymaps.org/) and Tableau Public (http://www.tableausoftware.com/public/).
Tools available in the market

Apart from free visualisation tools, there is also much more advanced software, used by firms to satisfy their information visualisation requirements for business intelligence support and situational awareness capability, as well as their data mining and reporting requirements. Other uses include: enterprise knowledge visualisation, linking knowledge to spatial data; online analytical processing and data mining; advanced social network analysis and visualisation; data mining and interactive visualisation; communication of location-based statistical data; on-line and batch environments for business graphics; information visualisation tools for high-dimensional non-linear data; visual analysis of data in spreadsheet format; analysis of high volumes of unstructured text; and analysis of high-dimensional data in large complex data sets and of multivariate time-oriented data.
  
Some	
   examples	
   of	
   such	
   software	
   are:	
   CViz	
   Cluster178
	
  visualisation,	
   IBM	
   ILOG 179
	
  visualisation,	
  
Spotfire180
,	
  Survey	
  Visualizer181
,	
  Infoscope182
,	
  Sentinel	
  Visualizer183
,	
  Grapheur	
  2.0184
,	
  InstantAtlas185
,	
  
Miner3D186
,	
  VisuMap187
,	
  Drillet188
,	
  Eaagle189
,	
  GraphInsight190
,	
  Gsharp191
,	
  Tableau192
.	
  
Other	
  examples	
  of	
  visualisation	
  software	
  can	
  be	
  found	
  in	
  
• http://groups.diigo.com/group/CROSSOVERproject/content/tag/visualisation	
  
Key Challenges and Gaps
New tools like the Many Eyes Word Tree [193], Treemap [194], Tag Cloud [195] and Bubble Chart [196] are available, but they lack interactivity. What is also missing is a better interaction between visualisation approaches and the analytical processes of text mining, as well as a better integration between new opportunities for data collection (such as open data and participatory sensing), policy modelling and visual analytics tools. Most applications related to visual analytics of public data remain at the level of visualisation only, with limited analytical functionalities.

[178] http://www.alphaWorks.ibm.com/formula/CViz
[179] http://www-01.ibm.com/software/websphere/ilog/
[180] http://spotfire.tibco.com/
[181] http://www.macrofocus.com/public/products/surveyvisualizer/
[182] http://www.macrofocus.com/public/products/infoscope/
[183] http://www.fmsasg.com/
[184] http://grapheur.com/
[185] http://www.instantatlas.com/
[186] http://www.miner3d.com/
[187] http://www.visumap.net/
[188] http://drillet.appspot.com/
[189] http://wp.eaagle.com/
[190] http://www.graphinsight.com/
[191] http://www.avs.com/products/gsharp/index.html
[192] http://www.tableausoftware.com/
[193] http://www-958.ibm.com/software/data/cognos/manyeyes/page/Word_Tree.html
[194] http://www.treemap.com/
[195] http://www.tagcloud.com/
[196] See http://manyeyes.alphaworks.ibm.com/manyeyes/, which can also be found in the project CROSSOVER Diigo collection
Visualisation tools are still largely designed for analysts and are not accessible to non-experts. Intuitive interfaces and devices are needed to interact with data results through clear visualisations and meaningful representations. User acceptability is a challenge in this sense; clear comparisons with previous systems to assess their adequacy, and objective rules of thumb to facilitate design decisions, would be a great contribution to the community.
Scalability of visualisation in the face of big data availability is a permanent challenge, since visualisation requires additional performance with respect to traditional analytics in order to allow for real-time interaction and reduce latency.
Finally, visualisation is largely a demand- and design-driven research area. In this sense, one of the main challenges is to ensure the multidisciplinary collaboration of engineering, statistics, computer science and graphic design.
A relevant challenge of visualisation and visual analytics is to adapt existing techniques to policy modelling:
• RelaNet (Landesberger et al. 2008), which displays network relations and is thereby able to show the connections and co-variances of the different opinions over time
• CirVis3D (Landesberger et al. 2009), which can visualize clustered opinion snippets as well as display time series in order to show the opinion trends over time
Following Chen (2005), who builds on Rhyne et al. (2004), we can enumerate a number of challenges in this area:
• Usability: the availability of low-cost, ready-to-use and reconfigurable information visualisation systems, as well as a balanced portfolio of general-purpose, fully functional information visualisation systems, is crucial
• Understanding elementary perceptual-cognitive tasks: research should not only focus on relatively high-level cognitive activities such as browsing and searching, or judging the relevance of information. Rather, focusing primarily on the identification and de-codification of visualized objects would be a fundamental step toward engineering information visualisation systems
• Prior knowledge: in order to understand the underlying message in visualized information, users need prior knowledge of how to operate the information visualisation system, as well as the domain knowledge needed to interpret the content
• Education and training: on the one hand, researchers and practitioners within the field of information visualisation need to learn and share principles and skills of visual communication. On the other hand, potential users from other fields must realize the value of information visualisation and how it might contribute to their work
• Intrinsic quality measures: finding quality metrics is crucial for the evaluation and selection of visual information advances, and for understanding to what extent an information visualisation design represents the underlying data faithfully and efficiently, and preserves intrinsic properties of the underlying phenomenon
• Scalability: the need for the adoption of parallel computing and other high-performance computing techniques in information visualisation
• Aesthetics: it is important to assess how insights and aesthetics interact in sustaining insightful and visually appealing information visualisation. What visual properties make users think a graph is pretty or visually appealing?
• Paradigm shift from structures to dynamics: a shift from the study of the structure of visualisation to the assessment of the dynamic properties of underlying phenomena, providing built-in trend detection mechanisms embedded in the data modelling component
• Causality, visual inference, and predictions: there is a strong need for sensitive and selective algorithms that can resolve conflicting evidence and suppress background noise. In this respect, a great role is played by complex network and link analysis
• Knowledge domain visualisation: it encompasses several of the aforementioned challenges, and it is linked to the fact that it is not only the information conveyed that is important, but also its structure, which is a social construction

Current research
• Close the loop of information selection, preparation and visualisation
• Multiple, coordinated views in visualisation/visual analytics [197]
• Integration of visualisation with comments / wiki / blogs
• Collaborative platform display
• Interaction between visualisation and models
• Mobile visual analytics tools, e.g. Sitegeist [198]
• Geo-visualisation of government data
• Integration with opinion mining and participatory sensing
• Evaluation framework for visualisation effectiveness
• Visualisation infrastructures for policy modelling issues
A list of EU-funded projects in visual analytics includes:
• VisMaster – Visual Analytics: Mastering the Information Age [199]
• VisSense – Visual Analytic Representation of Large Datasets for Enhancing Network Security [200]
• CUBIST – Combining and Uniting Business Intelligence and Semantic Technologies [201]
• WATTALIST – Modelling and Analysing Demand Response Systems [202]
• CODE – Commercially empowered Linked Open Data Ecosystems in Research [203]
• SemSeg – 4D Space-Time Topology for Semantic Flow Segmentation [204]
[197] See Heer, Jeffrey, Fernanda B. Viégas, and Martin Wattenberg (2007), "Voyagers and Voyeurs: Supporting Asynchronous Collaborative Information Visualisation", in CHI 2007, April 28–May 3, 2007, San Jose, California, USA. See also the presentation on social visualisation given by Fernanda Viégas and Martin Wattenberg at Harvard University, April 13, 2009
[198] http://sunlightfoundation.com/blog/2012/12/13/sitegeist-uncover-the-data-around-you/
[199] http://www.visual-analytics.eu/
[200] http://cordis.europa.eu/projects/rcn/94912_en.html
[201] http://cordis.europa.eu/projects/rcn/95904_en.html
[202] http://cordis.europa.eu/projects/rcn/100984_en.html
[203] http://cordis.europa.eu/projects/rcn/103419_en.html
[204] http://www.semseg.eu/
Future research: long term and short term issues
Short-term research
• Reusability of mashup tools for visual analytics (a mashup is a web application which combines data from one or more sources into a single integrated tool or application)
• Tighter integration between automatic computation and interactive visualisation, i.e. the availability of complex and powerful algorithms that allow for manipulating the data under analysis, transforming it in order to feed suitable visualisations (see the sketch after this list)
• Bias identification and signalling in visualisation
• Techniques and algorithms for creating effective visualisation tools based on perceptual psychology (dealing with the process by which the physical energy received by sense organs forms the basis of perceptual experience), cognitive science (focusing on how information is represented, processed, and transformed) and graphical principles
• Visualisations enabling interactive exploration techniques such as focus & context, in order for viewers to be able to see the object of primary interest presented in full detail while at the same time getting an overview impression of all the surrounding information (the context) available
• Exploiting visualisation as a medium to engage citizens in policy-related complex matters
• Visualisation as a way to provide (persuasive) feedback and change in attitudes, opinions and behaviours
• Visualisation as a medium for grassroots/crowd-sourced participation and collaboration on data-related issues
• Impact evaluation of visual analytics on policy choices
• Research in making visualisation accessible for non-experts
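The following minimal sketch illustrates the second item above, the pattern of automatic computation feeding visualisation: an algorithm (here k-means clustering from the scikit-learn library, an illustrative choice rather than a recommendation of this roadmap) first derives structure from synthetic data, and only the derived structure is visualised.

    # Hedged sketch: automatic computation (clustering) feeding a
    # visualisation. The data and libraries are illustrative assumptions.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Three synthetic "opinion groups" in a two-dimensional indicator space.
    points = np.vstack([rng.normal(loc=c, size=(100, 2)) for c in (-3, 0, 3)])

    labels = KMeans(n_clusters=3, n_init=10).fit_predict(points)  # computation
    plt.scatter(points[:, 0], points[:, 1], c=labels, s=12)       # visualisation
    plt.title("Clusters computed automatically, then visualised")
    plt.show()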
  	
  
	
  
Long-term research
• Learning/adaptive algorithms for user intent; learning/adaptive algorithms are defined as being capable of automatically changing their behaviour based on their execution context (data handled by the algorithm, configuration parameters of the runtime environment, resources used) in order to obtain optimal performance (a minimal sketch follows this list)
• Advanced visual analytics interfaces: visual interfaces in which neither the analytics nor the visualisation needs to be advanced in itself, but the synergy between automation and visualisation is in fact advanced
• Intuitive and affordable visual analytics interfaces for citizens
• Development of novel interaction algorithms incorporating machine recognition of the actual user intent, or of the actual relevance for the user, and appropriate adaptation of main display parameters such as the level of detail, data selection, etc. by which the data is presented
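The following hypothetical routine is a minimal sketch of the first item: it is "adaptive" in exactly the sense defined above, inspecting its execution context (data volume and a latency budget, both illustrative parameters) and changing its own behaviour, here the level of detail displayed, in order to stay within that budget.

    # Hedged sketch of an adaptive display algorithm; all names and the
    # probing strategy are illustrative assumptions, not a known API.
    import time

    def adaptive_detail(points: list, latency_budget_s: float = 0.05) -> list:
        """Subsample so that an estimated rendering pass fits the budget."""
        start = time.perf_counter()
        probe = points[:1000]
        _ = [p * 1.0 for p in probe]  # probe the per-point processing cost
        per_point = (time.perf_counter() - start) / max(len(probe), 1)
        affordable = max(int(latency_budget_s / max(per_point, 1e-9)), 1)
        step = max(len(points) // affordable, 1)
        return points[::step]  # coarser level of detail in tighter contexts

    detail = adaptive_detail(list(range(1_000_000)))
    print(f"showing {len(detail)} of 1000000 points")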
  
3.2.4. Serious Gaming for Behavioural Change
Introduction and definition
So far, collaborative ICTs have dramatically augmented the capacity of people to connect and collaborate. Yet less impact has been achieved in terms of actual change and action, as most collaboration remains confined to an elite of highly-motivated individuals and faces the traditional limits of human attention and motivation. As illustrated in other challenges, ICT can improve data collection and analysis, but if attention and motivation are not present, little impact can be achieved. This challenge depicts ICT solutions that enable behavioural change and action. Even when citizens and government are fully aware of necessary policy choices, they might irrationally choose short-term benefits.
Simulation and serious gaming (also known as interactive learning environments) offer opportunities to act on personal incentives for action and to show the long-term and systemic effects of individual choices, thereby lowering the engagement barrier to collaborative governance and augmenting its impact. In particular, serious games have been developed for educational purposes and for raising awareness on particular issues while not requiring high levels of engagement.
Simulation tools enable users to see the systemic and long-term impact of their actions in a very concrete and tangible form, thereby encouraging more responsible behaviour and long-term thinking. Gaming engages users through the "fun" and "social" dimensions, thereby providing incentives towards action. Feedback and simulation systems include both individual and government behaviour, thereby allowing policy-makers and citizens to detect the impact of both individual and policy choices.
Engagement of domain experts is a crucial issue for building reliable games and simulation tools. Toolkits and modules enable a wider audience of stakeholders to take a direct, active role in games development, thereby enabling all relevant knowledge to be elicited and captured by the simulation and gaming scenarios and models. Pre-built toolkits enable games to be created directly by thematic experts rather than by technology experts.
	
  
Why it matters in governance
Most applications of simulation and gaming are developed in the context of education and learning, while more interactive feedback-producing systems have been applied to personal health and energy conservation. The specific challenges of gaming for public policy awareness and action are currently less researched, but they are very specific because of the large-scale interaction and the systemic effects of individual behaviour which characterise this field.
Furthermore, the availability of a simulation toolkit is necessary to empower a diverse and inclusive simulation landscape, where the most diverse set of ideas can be influential and listened to.
  
	
  
Recent trends
Simulation and gaming have started to be applied in different policy contexts in order to engage wider audiences. Games are developed "on purpose" by highly skilled developers, in the public sector and by civil society, therefore requiring significant investment while lacking the specific thematic knowledge of the field. Furthermore, existing serious games lack the flexibility to allow for unpredictable developments and non-linear behaviours, where scenarios evolve and adapt to users' choices rather than being rigidly prescribed. Commercial solutions that turn long-term effects into short-term feedback are available, but they still lack usability as well as the fun dimension of games, and they finally require high levels of engagement. They are designed for individual feedback and do not cover the complexity of systemic interactions, which are typical of public governance issues.
To sum up, serious gaming still requires high levels of engagement, and progress is needed in terms of usability and appeal in order to reach "casual gamers", including through immersive and emotion-aware games.
	
  
Current practice
• Purpose-built gaming and simulation for the understanding of policy issues and of individual behaviour
Public Policy Applications
Simulation and gaming can be useful to policy makers in the following ways (Mayer et al. 2004, Bots and van Daalen 2007):
• Research and analyse a policy issue when it is not feasible to tackle the real system (due to time constraints, or just because it does not exist) or to include human behaviour by way of a computer model (due to unrealistic assumptions such as perfect rationality). In this view the game becomes a laboratory which can produce a great deal of data providing useful insights
• Design alternative solutions to a problem, and analyse and assess the possible consequences of the alternative solutions in order to recommend a course of action to the policy-maker. In this view the game can be seen as a virtual design studio, useful to boost out-of-the-box thinking about alternative solutions to a policy issue, and also to ponder the consequences of recommendations
• A game can be used to provide strategic advice, acting as a virtual practice ring in which the policy maker can rehearse different strategies. A typical example of this kind is given by war games, in which the other players act as sparring partners for the policy makers, playing the role of another stakeholder as opportunistically as possible
• Many policy issues require mediation, so that it is necessary to seek consensus among stakeholders. This can be done by putting the players around a virtual negotiation table by means of a mediation game. In this way, changes in attitude and the discovery of new opportunities for conflict resolution are eased by the interaction among stakeholders during the game
• Normally experts and elites are involved in the policy-making process, while citizens and ordinary people are completely neglected. However, by defining virtual consultation forums it is possible to allow equal access for all the actors carrying views and opinions which would otherwise have been disregarded. In this respect, using games and simulations bears an advantage given by the fact that ordinary people can focus and express themselves more easily when playing a role
• Clearly, ethical questions and opinions have a great influence on the policy making process. Games and simulations can be used to clarify the values and arguments behind a point of view. While in ordinary political debate values remain implicit, by creating a virtual parliament it is possible to make them explicit. Furthermore, gaming and simulations can be used to magnify positions and opinions of stakeholders, so that the game can be designed to reward players for the quality and clarity of their argumentation
Moreover, readapting the taxonomy of Sawyer and Smith, simulation and serious gaming can be useful in the following domains (cross-referenced with game objectives):
• Public sector and NGOs: public health education and mass casualty response (games for health); political games (advergames); employee training (games for training); providing info to the public (games for education); data collection/planning (games for research); strategic and policy planning, spatial planning (games for producing); diplomacy and opinion research (games as work)
• Defence: rehabilitation and wellness (games for health); recruitment and propaganda (advergames); soldier/support training (games for training); school house education (games for education); wargames and planning (games for research); war planning and weapons research (games for producing); command and control (games as work)
• Healthcare: cyber therapy/exergaming (games for health); public health (advergames); policy and social awareness campaigns (games for training); training games for health professionals (games for education); games for patient education and disease management (games for research); visualization and epidemiology, biotech manufacturing and design (games for producing); public health response planning and logistics (games as work)
• Education: informing about diseases/risks (games for health); social issue games (advergames); training teachers/workforce skills (games for training); learning (games for education); computer science and recruitment (games for research); P2P learning (games for producing); distance learning (games as work)
	
  
	
  
Inspiring cases
Let us now present some inspiring cases of serious games applied to policy making:
• SimHealth: The National Health Care Simulation, a management simulation of the U.S. healthcare system released during Congressional debates on the Clinton health care plan
• SimCity 2013: an upcoming city-building/urban-planning simulation computer game, to be released in February 2013, which allows players to visualize data such as pollution and water distribution
• City One: the game teaches industry professionals and civil servants real-world planning in fields such as the optimization of banking, retail, energy and water solutions
• Democracy 2: a government simulation game in which the player acts as the president or prime minister of a democratic government, introducing and altering policies in areas such as tax, economy, welfare, foreign policy, transport, law and order and public services
• Close Combat Marines: a serious game for military training purposes, with particular reference to the United States Marine Corps
• Incident Commander™, a NIMS-compliant training tool for Homeland Security: in this game the player mimics the role of incident commander in case of a natural or man-made disaster, terrorist attack or hostage situation. Application: US Department of Justice officers' training
• Virtual Battlespace Systems 2: an interactive military simulator developed for the United States Marine Corps and the Australian Defence Force to meet the individual needs of military, law enforcement, homeland defence, loadmaster, and first responder training environments
• Pulse!! Virtual Clinical Learning Lab for Health Care Training: the game recreates a lifelike, interactive, virtual environment in which civilian and military health care professionals practice clinical skills in case of catastrophe or terrorist attack
• Levee Patroller: an immersive 3D game-based environment to train levee inspection knowledge and skills, in order to be prepared to cope with unexpected flooding
• Construct.it: a game-based learning environment allowing players to experience and debrief some of the complexities involved in large-scale urban projects. Application: development plan for the Scheveningen-Harbour of The Hague
• Simport-Maasvlakte 2: a computer-supported multi-player simulation game that mimics the real processes involved in planning, equipping and exploiting the new area in the Port of Rotterdam
• Pro Rail: capacity optimization of a complex infrastructural network, in this case the Dutch railways. Applications: cargo capacity management, opening of the Vecht-bridge, increasing traffic on the A2-corridor
• Win Manager: an online multiplayer negotiation business game in which players conduct a sequence of bilateral negotiations pursued through private threads on the general game board
• Management Business Game: a business game focussed on the simulation of a company's management in a competitive market, which can be played both online and offline
• Management Utilities Euroshop: management of a chain of retail stores selling electronic products, through which players identify the relationships between management issues and competitive market factors
• Shadow Government: a serious game based on the gamification of real countries, systems, and worldwide events. Based on System Dynamics, customized at the country level, it allows players to test several policy interventions and evaluate their impacts (a minimal sketch of the underlying stock-and-flow mechanism follows this list)
Key challenges and gaps
Following Mayer (2009) we can identify the following challenges and gaps:
• Cultural changes concerning the interaction between science, politics and democracy; changes in the role of elites, activism and citizens' participation, as well as the recent emergence of game cultures
• Changes in the perception of public policy making, i.e. from rational-comprehensive to political and incremental
• How natural and human-caused events can influence the political agenda (climate warming, pollution, depletion of natural resources, terrorism)
• Institutional changes and the emergence of new industrial or institutional actors
• Technological innovation in computing and simulation modelling, such as agent-based models, cellular automata, or virtual game worlds
Moreover, following IDATE, we can identify the following key challenges regarding serious gaming in particular:
• Restructure the game in order to cope with specific purposes and broaden the audience
• Innovate the existing business models
• Automate a portion of the production process, for example through the integration of sector-specific elements
• Create sector-targeted serious games and persuade reluctant users
• Invest in all connected platforms
Current research
• Kit-based serious games
• Integration between policy models and simulation
• Design of appealing, adaptive and context-aware interfaces
• Impact of simulation and gaming on individual behaviour
• Unconscious impact of feedback systems
	
  
Research disciplines: human-computer interaction, sensors, information visualisation, sensor design, psychology, pedagogy, public policy
	
  
Future research: long term and short term issues
Short-term research
• Citizens- and experts-generated gaming
• Immersive interfaces
• Large-scale collaboration in development
• Casual serious gaming
• Ethical issues in serious gaming
• User-controlled simulation and gaming
• Non-linear and adaptive scenarios for gaming in policy contexts
• Integrated analysis of information and behavioural change
• Impact of simulation and gaming on systemic behaviour
	
  
Long-term research
• Augmented-reality citizens-generated gaming and simulation
• Ubiquitous feedback systems on public governance
• Modelling and displaying long-term systemic effects of individual choices on public policy topics
• Interplay between different feedback systems and other information outputs
	
  
	
  
	
  
	
  
3.2.5. Linked Open Government Data
The notion of Government Data concerns all the information that governmental bodies produce, collect or pay for. This could include geographical data, statistics, meteorological data, data from publicly funded research projects, and digitized books from libraries. In this respect, the definition of Open Public Data is applicable when that data can be readily and easily consulted and re-used by anyone with access to a computer. In the European Commission's view, 'readily accessible' means much more than the mere absence of a restriction of access to the public. Data openness has resulted in some applications in the commercial field, but by far the most relevant applications are created in the context of government data repositories. With regard to linked data in particular, most research is being undertaken in other application domains such as medicine. Government is starting to play a leading role towards a web of data. However, current research in the field of open linked data for government is limited.
Following the Open Government Working Group Meeting in Sebastopol [205] and the Sunlight Foundation [206], there is a set of principles according to which data can be considered open:
• Data must be complete, i.e. no part of them should be omitted due to security, privacy or privilege limitations
• Data must be primary, disaggregated and not modified, and must be published with the finest possible level of granularity
• Data must be timely, as their value is time-relevant
• Data must be accessible to the widest range of users and purposes
• Data formats must not be under the exclusive control of an entity
• Data should not be subject to any copyright
• Data must be machine-processable (see the sketch below)
• Data access must be non-discriminatory
• Data must be permanent, so that their embedded information is available over time
• Data must be cheaply accessible

[205] http://opengovdata.org/home/8principles
[206] http://sunlightfoundation.com/policy/documents/ten-open-data-principles
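As a minimal sketch of what the machine-processability principle buys in practice, the following lines aggregate a dataset published as CSV; the file name and column names are illustrative assumptions rather than a real dataset from the portals discussed in this section.

    # Hedged sketch: aggregating a machine-processable open dataset.
    # "public_spending.csv" and its columns are hypothetical.
    import csv
    from collections import defaultdict

    totals = defaultdict(float)
    with open("public_spending.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):          # one dict per published record
            totals[row["department"]] += float(row["amount"])

    for department, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{department}: {amount:,.2f}")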
  
In the same way, according to Davies et al. [207], engaging with open data should:
• Be demand driven
• Put data in context
• Support conversation around data
• Build capacity, skills and networks
• Collaborate on data as a common resource
Moreover, according to Vander Sande et al. [208], publishing data leads to more transparency, new businesses, better evidence-based policy making and increased public sector efficiency only if the different actors in the chain have co-ownership of the data and are able to participate directly in its correction. In this sense, free licensing and shared platforms to publish the data and to offer feedback and corrections directly on them are crucial.
Linked open government data are valuable for a number of reasons. Firstly, openness in government data is important for economic reasons. For instance, the Open Data Strategy for Europe launched by the European Commission is expected to deliver a €40 billion boost to the EU's economy each year. More generally, open government data are important for participatory decision making:
• Promotion of transparency concerning the destination and use of public expenditure
• Improvement in the quality of policy making, which becomes more evidence-based
• Display of the full economic and social impact of information, and creation of services based on government data
• Increase in collaboration across government bodies, as well as between government and citizens
• Emergence of new added-value services
• Increased awareness of citizens on specific issues, as well as better information about government policies
• Promotion of the accountability of public officials
Very important examples are given within the scope of the Open Government Initiative [209] carried out by the Obama Administration for promoting government transparency on a global scale:
• Data.gov [210]: a platform which increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Within the scope of Data.gov, the US and India have developed an open-source version called the Open Government Platform [211] (OGPL), which can be downloaded and evaluated by any national government or state or local entity as a path toward making their data open and transparent
• USAspending.gov [212]: a searchable website displaying, for each Federal award, the name of the entity receiving the award, the amount of the award, information on the award, and the location of the entity receiving the award
• FederalRegister.gov [213]: an HTML edition of the Federal Register to make it easier for citizens and communities to understand and get informed about the regulatory process
• Performance.gov [214]: a website providing a window on the US Government Administration's effort to improve performance and accountability, in order to create a government that is more effective, efficient, innovative, and responsive
• IT Dashboard [215]: a website enabling federal agencies, industry, the general public and other stakeholders to view details of federal information technology investments

[207] http://www.w3.org/2012/06/pmod/pmod2012_submission_5.pdf
[208] http://www.w3.org/2012/06/pmod/pmod2012_submission_4.pdf
[209] http://www.whitehouse.gov/open
[210] http://www.data.gov/
[211] http://www.opengovplatform.org/
[212] http://www.usaspending.gov/
At the European level there is the repository of applications making use of open data, publicdata.eu [216]. At the national level in Europe, initiatives include:
• United Kingdom: Data.gov.uk [217], which makes some 5,400 datasets available from all central government departments and a number of other public sector bodies and local authorities
• Italy: Dati.gov.it [218], an open data portal allowing citizens, developers, firms and public administrations to make use of the public administration's information stock
• Spain: datos.gob.es [219], the national portal for managing and organizing the Catalogue of Public Information of the General State Administration
• Ireland: StatCentral.ie [220], providing standard documentation on recurring official statistics and links to where they can be found
• Netherlands: Overheid.nl [221], the central access point to all information about government organizations of the Netherlands

[213] https://www.federalregister.gov/
[214] http://www.performance.gov/
[215] http://www.itdashboard.gov/
[216] http://publicdata.eu/app?page=1
[217] http://data.gov.uk/
[218] http://www.dati.gov.it/content/datigovit-il-portale-dei-dati-aperti-della-pa
[219] http://datos.gob.es/datos/
[220] http://www.statcentral.ie/
[221] http://www.overheid.nl/
At the transnational level there are the World Bank [222], United Nations [223], REEP [224] and Open Knowledge Foundation [225] portals.
As highlighted by the experience of Open Corporates [226], which turns freely available raw data into something genuinely useful that customers will be prepared to pay for, open data are also valuable for the private sector.
	
  
Figure 14 - Open Data Business Model (source: Istituto Superiore Mario Boella)
In this case the data can generate revenue in a number of ways:
• Subscriptions or royalties;
• The so-called "freemium" model, where a basic service is offered for free but with charges for premium services;
• Advertising by third parties;
• Cross subsidy;
• Offering services that are cheaper and more efficient to outsource.
[222] http://data.worldbank.org
[223] http://data.un.org
[224] http://www.reegle.info/
[225] http://opengovernmentdata.org
[226] http://www.w3.org/2012/06/pmod/pmod2012_submission_16.pdf
Looking at best practices and examples of Linked Open Government Data, it is possible to refer to the diagram maintained by Richard Cyganiak (DERI, NUI Galway) and Anja Jentzsch (HPI), which is a visualization of the key LOD providers and their linkages [227]:

Figure 15 - LOD providers and their linkages
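As a minimal sketch of how such linkages can be consumed programmatically, the following query walks typed links on DBpedia, one of the hubs of the LOD cloud; the SPARQLWrapper library and the specific endpoint and query are illustrative assumptions, not tools prescribed by this roadmap.

    # Hedged sketch: following typed links in the LOD cloud with SPARQL.
    # Endpoint, prefixes and query shape are illustrative assumptions.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
        SELECT ?country ?capital WHERE {
          ?country dct:subject dbc:Member_states_of_the_European_Union ;
                   dbo:capital ?capital .
        } LIMIT 10
    """)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["country"]["value"], "->", row["capital"]["value"])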
Some other inspiring cases are:
• The clean energy information gateway reegle.info [228], which makes use of LOD to provide comprehensive clean energy country profiles, so that users can access the highest-quality information in a visually appealing fashion. By using reegle.info, small organizations can share responsibilities, as they are not required to maintain large databases. Moreover, the information is directly linked to data providers, so that updates take place immediately
• Open Energy Information (OpenEI) [229], a collaborative knowledge-sharing platform providing open and free access to energy-related models, tools and data. The business benefit of using this system stems from the fact that a small organization, without a huge team of people to maintain a large database of clean energy information, can obtain the same amount of knowledge by making use of an overview of a variety of energy-related and country-specific topics. Moreover, given the direct link to the data providers' information, any update occurs in real time
• The	
   UK	
   official	
   government	
   archive	
   Legislation.gov.uk,	
   which	
   offers	
   access	
   to	
   all	
  
published	
  UK	
  legislation	
  so	
  that	
  it	
  can	
  be	
  shared	
  by	
  citizens	
  and	
  businesses.	
  The	
  dataset	
  
	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  
227
	
  http://lod-­‐cloud.net/	
  
228
	
  http://data.reegle.info	
  
229
	
  ttp://en.openei.org	
  
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  D2.2.1	
  INTERNATIONAL	
  RESEARCH	
  ROADMAP	
  
109	
  |	
  P a g e 	
  
covers	
   more	
   than	
   800	
   years	
   and	
   includes	
   the	
   most	
   recent	
   changes	
   in	
   legislation.	
  
Legislation.gov.uk	
   merges	
   the	
   contents	
   of	
   the	
   Office	
   of	
   Public	
   Sector	
   Information	
  
website	
  and	
  the	
  Statute	
  Law	
  Database	
  in	
  ordered	
  to	
  provide	
  UK	
  public	
  and	
  local	
  acts,	
  
church	
  instruments,	
  ministerial	
  orders	
  and	
  acts	
  of	
  the	
  parliament.	
  
• The tool publicspending.gr [230], which takes data from a variety of sources; from the combined data the system is able to derive graphs showing where public money is being spent and which departments are spending it. This is the first linked open data application in Greece, where it can be used to aid policy making and transparency
• Another interesting case is "Where Does My Money Go?" [231], which shows how daily taxes are allocated among the different functions of government
• UK Crime Map [232], which has prompted a change in the way police resources are prioritized, and is widely used by the government itself, which benefits from much more efficient access to information
• The BudgIT [233] platform, which turned the Nigerian budget into an interactive document, complete with commentary channels via the Web and SMS
• DERI Galway's Volvo Ocean Race [234] app for Android and iPhone, created by converting various data sets into linked data and then enriching that data through crowdsourcing. DERI also classified 350 apps, showing that the majority of apps:
   • Have been produced by individuals rather than commercial companies
   • Are Web based
   • Combine OGD with maps
   • Rely on static data sets rather than real-time data
   • Use a single data set, rather than mixed data
• The Open Culture Data project (Open Cultuur Data) [235], presenting the results of a hackathon. The winning entry made use of a video dataset and smartphone capabilities to match a person's location with video taken in a given area
• GLAMs [236] (galleries, libraries, archives, museums), which provide open access to cultural heritage
Some other interesting tools are the RDF Data Cube Vocabulary [237], the Data Cube faceted browser [238], Openpolis [239], Nosdeputes [240] and Virtueel Platform [241].
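To make the Data Cube idea concrete, the following minimal Python sketch publishes a single statistical observation as linked data with the W3C RDF Data Cube Vocabulary, using the open-source rdflib library. The dataset namespace, dimension and measure names are hypothetical placeholders, not taken from any of the tools above.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    QB = Namespace("http://purl.org/linked-data/cube#")  # W3C Data Cube vocabulary
    EX = Namespace("http://example.org/budget/")         # hypothetical dataset namespace

    g = Graph()
    g.bind("qb", QB)

    # One dataset with one observation: hypothetical 2012 health spending in euros.
    g.add((EX.dataset, RDF.type, QB.DataSet))
    obs = EX.obs2012health
    g.add((obs, RDF.type, QB.Observation))
    g.add((obs, QB.dataSet, EX.dataset))
    g.add((obs, EX.refPeriod, Literal("2012", datatype=XSD.gYear)))     # dimension
    g.add((obs, EX.sector, Literal("health")))                          # dimension
    g.add((obs, EX.amountEUR, Literal(1250000, datatype=XSD.integer)))  # measure

    print(g.serialize(format="turtle"))

Once observations are published in this shape, generic tools such as the faceted browsers mentioned above can slice the data along its dimensions without dataset-specific code.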
Current challenges: open data and opinion mining
  
There is a lot of effort being invested in extracting public opinion and sentiment towards policies [242]. The problem is that the tools that are better at extracting real data from social media content are, of course, expensive. In this respect the future challenges for opinion mining applied to social media are:
• How to reduce human effort;
• Identification of good ideas;
• Finding the necessary investments;
• How to improve the usability of tools.
We also have to note that most social media interaction is not carried out in public: in Facebook, for example, only open discussion pages can provide information for sentiment analysis and opinion mining.
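As a concrete illustration of the low-cost tooling these challenges call for, the following Python sketch scores the sentiment of short policy-related posts with a simple word-list approach. The tiny lexicon and the example posts are invented for illustration; the expensive production tools discussed above rely on trained models and far richer language resources.

    # Minimal lexicon-based sentiment scoring for short social media posts.
    POSITIVE = {"good", "great", "support", "useful", "transparent", "fair"}
    NEGATIVE = {"bad", "waste", "oppose", "unfair", "costly", "corrupt"}

    def sentiment(post):
        """Return a score in [-1, 1]: balance of positive vs negative words."""
        words = [w.strip(".,!?").lower() for w in post.split()]
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

    posts = [
        "The new budget portal is great and transparent",
        "This policy is a costly waste, I oppose it",
    ]
    for p in posts:
        print(f"{sentiment(p):+.2f}  {p}")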
  
	
  
[227] http://lod-cloud.net/
[228] http://data.reegle.info
[229] http://en.openei.org
[230] http://www.w3.org/2012/06/pmod/pmod2012_submission_32.pdf
[231] http://www.wheredoesmymoneygo.org/
[232] http://www.police.uk/
[233] http://www.w3.org/2012/06/pmod/pmod2012_submission_8.pdf
[234] http://www.w3.org/2012/06/pmod/pmod2012_submission_20.pdf
[235] http://www.opencultuurdata.nl/
[236] http://www.w3.org/2012/06/pmod/pmod2012_submission_22.pdf
[237] http://www.w3.org/TR/vocab-data-cube/
[238] http://www.w3.org/2012/06/pmod/pmod2012_submission_12.pdf
[239] http://www.openpolis.it/
[240] http://www.nosdeputes.fr/
[241] http://virtueelplatform.nl/nieuws/apps-for-amsterdam-zoekt-nieuwe-open-data-apps/
[242] http://www.w3.org/2012/06/pmod/pmod2012_submission_29.pdf

Current research topics
• Integration of open government data (OGD) and social media data (SMD): policy makers will soon be able to see the subjective reaction to objective changes through a dashboard powered by linked data (a minimal sketch of such a join follows)
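A minimal sketch of what such a dashboard back-end could do: join an open-government indicator with aggregated sentiment scores so that the subjective reaction can be read against the objective change. It uses the pandas library; all figures are invented placeholders, not real OGD or social media data.

    import pandas as pd

    # Hypothetical open government data: a monthly policy indicator.
    ogd = pd.DataFrame({
        "month": ["2013-01", "2013-02", "2013-03"],
        "unemployment_rate": [11.8, 12.0, 12.1],
    })

    # Hypothetical social media data: average sentiment of posts on the policy.
    smd = pd.DataFrame({
        "month": ["2013-01", "2013-02", "2013-03"],
        "avg_sentiment": [-0.10, -0.35, -0.42],
    })

    # The join a linked-data-powered dashboard would render: objective change
    # (the indicator) side by side with the subjective reaction (sentiment).
    dashboard = ogd.merge(smd, on="month")
    print(dashboard)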
3.2.6. Collaborative Governance

Introduction and definition
   	
  
While all challenges provide opportunities for more effective large-scale collaboration in public action, the relevant institutional design is far from being introduced. The formal inclusion of citizens' input in the policy-making process, the institutional rules deriving from it, and the legitimacy and accountability framework are all issues that have so far been little explored. Instant, open governance implies a substantial increase in feedback loops, on a different scale with respect to the present context. The stability of any system is affected by the number, speed and intensity of its feedback loops, and the institutional context has been designed for fewer and slower loops.
  	
  
The definition and design of the public sector's role is being directly affected by the radical increase in bottom-up collaboration deriving from the lower cost of self-organisation. There are also important questions to be answered: where does the legitimacy come from, how to gain and maintain the trust of users, how to identify users online. There is also a very important issue of how to take into account the diversity of standpoints, i.e. how to achieve a consensual answer to controversial social issues, especially when we do not offer ready-made alternatives but start from an open question and work through the different options proposed by participants. Furthermore, the trade-off between direct and representative models of democracy will have to be analysed in this context. It is far from proven that open and collaborative governance is really inclusive and representative of all social groups, including the disadvantaged, and of all standpoints. There is a visible risk that online collaboration increases the divide rather than reducing it. However, in the current situation, the circles where policy proposals are designed, amended and ranked hierarchically are very small, and composed of leaders of political parties, top-level civil servants and CEOs of large firms. Online collaborative tools would broaden these circles to all those who have competence in the field being discussed and the ability to elaborate an argument.
  
The management of institutional bodies is changing: innovative ideas and insights coming from employees and citizens are key resources to be exploited, and meritocracy and transparency are entering a once stable and conservative workforce. Enhanced collaboration with citizens and private third parties should be accompanied by adequate legal and accountability frameworks, mapping incentives to participation and enabling business models for different stakeholders.
The privacy paradigm is changing, and appropriate, more dynamic frameworks have to be designed, taking into account the willingness of citizens to share information while at the same time ensuring their full awareness of the implications and their control over how the data is used.
  
	
  
	
  
Recent trends
  
The current status is characterized by practice-driven implementation, accompanied by little scientific reflection. Guidelines and soft regulation are being created from scratch and by building on the examples of other institutions. Collaborative governance is developing rapidly without an appropriate reference framework.
  
	
  
	
  
Public Policy Application
  	
  
	
  
As is widely recognized, the policy issues of our age can be addressed only through the collaboration of all the components of society, including the private sector and individual citizens. In this view the advantages of collaborative governance are:
• Effectiveness and efficiency in the delivery of programs
• Professional development / capacity building
• Better needs assessment and use of available resources
• Boosting communication among citizens and stakeholders
• Increasing transparency and accountability, as well as equity and inclusiveness
• Avoiding duplication in policy making
• Increasing responsiveness and access, and building relationships
• Improving public image
• Improving the quality of information
• Consensus-based decision-making
• Increased acceptance of results

Collaborative governance can be applied to virtually all policy-making fields. The following areas are mere examples:
• Infrastructure management: building new infrastructure often entails the need to balance conflicting interests, especially where huge externalities are concerned
• Digital inclusion: the increase in the use of ICT has to be fostered by collaboration at every level
• Energy: delivering affordable and efficient energy, collaboration on the definition of energy regulations
• Environment: definition of new regulations on environmental safeguards, mediation concerning the use of environmental resources, collaboration in assessing public projects with environmental impact
• Health care: collaboration in health care reform, awareness and education campaigns, disease prevention
• Transportation: collaboration in the definition of a transportation plan, negotiation of transportation rules, settlement of disputes on the construction of a transportation facility
In general, the fields where collaborative governance is most fruitful are those that (1) involve many different categories of stakeholders, and (2) where the conflicts are not frozen along entrenched lines. Indeed, (1) the benefits of collaboration are greatest when diversity is highest, and (2) entrenched conflicts are hardly solved by dialogue, and become a field of power balance and force.
  
	
  
	
  
Inspiring cases of ICT applications to Collaborative Governance
  
	
  
The Open Government Initiative [243], carried out by the Obama Administration to promote government transparency and citizen engagement on a global scale, includes:
• Partner4Solutions [244]: the website of the Partnership Fund for Program Integrity Innovation at the Office of Management and Budget. Through this tool, the Partnership Fund aims at gathering ideas from citizens for improving Federal assistance programmes
• Regulations.gov [245]: on this website it is possible to comment on proposed regulations and related documents published by the U.S. Federal government, as well as to search and review original regulatory documents and the comments submitted by others
• Challenge.gov [246]: an online challenge platform allowing the public to bring the best ideas and top talent to bear on the nation's most pressing challenges, which can range from simple ideas and suggestions to proofs of concept, designs, or finished products that solve the grand challenges of the 21st century
• We the People [247]: allows citizens to create and launch a petition in order to engage the government
• Change by Us [248], which allows people to propose ideas and projects for improving the cities they live in. So far the tool has been applied in New York, Phoenix and Philadelphia
• Wikirendum [249], a platform where citizens can share ideas on policy making
• The Digital Engagement Guide [250], a platform where ideas on how to use digital and social media in the public sector are shared
  
	
  
	
  
	
  
Key challenges and gaps
  
	
  
The key challenges and gaps in collaborative governance are:
• Coping with accelerating change in policy making
• Overlaps between institutions and jurisdictions
• The increasing complexity of the issues to be tackled
• The ability to choose the appropriate tool for tackling the problem at hand
• The need to integrate policies and resources
• Managing expectations
• Public involvement processes that can be disconnected from real decision making
• Tackling conflicting interests among participants
• Using tools appropriate to the scale (small or large) of the problem/solution
• Calibrating the level of citizen participation required with respect to the nature of the problem/solution [251]
• Defining appropriate levels of accountability
• Avoiding instability in preferences
[243] http://www.whitehouse.gov/open
[244] http://www.partner4solutions.gov/
[245] http://www.regulations.gov/#!home
[246] http://challenge.gov/
[247] https://wwws.whitehouse.gov/petitions
[248] http://changeby.us/
[249] http://wikirendum.org/
[250] http://www.digitalengagement.info/
[251] Making a reliable synthesis of a large-scale discussion can be daunting. An approach considered viable is some sort of machine-assisted human "harvesting", or "catching": software uses several algorithms to identify possible "atoms of interest". For example, network analysis can detect the balkanization of a community around a polarizing issue; if users give ratings to statements (as is the case in some of the tools examined by CROSSOVER), Bayesian inference on user behavior can detect inconsistencies, and therefore irrational biases. The content thus algorithmically selected is then presented to human "harvesters", who can write summaries. These attention-mediation algorithms were apparently developed in the context of medicine, as a tool for helping doctors formulate diagnoses. A small sketch of the network-analysis step follows.
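To illustrate the balkanization-detection step named above, here is a small Python sketch using the open-source networkx library: it builds a reply graph of a discussion and uses modularity-based community detection as a crude polarization signal. The graph and the threshold are invented for illustration and are not taken from the CROSSOVER tools.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities, modularity

    # Hypothetical reply graph: nodes are participants, edges are replies.
    G = nx.Graph()
    G.add_edges_from([
        ("a", "b"), ("a", "c"), ("b", "c"),   # camp 1
        ("x", "y"), ("x", "z"), ("y", "z"),   # camp 2
        ("c", "x"),                           # a single bridge between camps
    ])

    communities = greedy_modularity_communities(G)
    q = modularity(G, communities)

    # High modularity with few cross-community edges suggests the discussion
    # has split into camps that barely talk to each other ("balkanization").
    print([sorted(c) for c in communities], round(q, 2))
    if q > 0.3:  # illustrative threshold only
        print("possible balkanization around a polarizing issue")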
Current research
• Analysing the compatibility of new collaborative behaviour with the existing institutional framework
  	
  
	
  
Research disciplines: political sciences, public administration, law, sociology, and other social sciences in general (including institutional economics, for example), as well as organisational, network and innovation theories, etc.
Possible research instruments: thematic networks, Support Action
  
	
  
Future research: long-term and short-term issues

Short-term research
• Updated institutional framework

Long-term research
• New models of governance and service provision
  
	
  
	
  
3.2.7. Participatory Sensing

Introduction and definition
  
Participatory sensing refers to the use of sensors, usually embedded in personal devices such as smartphones, to allow citizens to feed in data of public interest. This could include anything from photos to passive monitoring of movement in traffic. Participatory sensing involves a higher commitment from citizens, in contrast to opportunistic sensing, where the user may not be aware of active applications. The diffusion of mobile phones significantly lowers the barriers to participation and data input by citizens, with automated geo-tagging and time-stamping: given the right architecture, they could act as sensor nodes and location-aware data collection instruments. While traditional sensor nodes are centralised, these sensors are under the owners' control. This would give way to data availability at an unprecedented scale (a minimal sketch of such a reading follows).
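The following minimal Python sketch shows the kind of record a smartphone acting as a location-aware data collection instrument might submit: a sensor reading with an automatic geo-tag and time-stamp, plus a crude location-coarsening step standing in for the user-controlled anonymisation discussed below. The schema and field names are hypothetical.

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class SensorReading:
        """One citizen-contributed observation (hypothetical schema)."""
        sensor: str      # e.g. "noise_dB" or "pm10"
        value: float
        lat: float       # automatic geo-tag
        lon: float
        timestamp: str   # automatic time-stamp (ISO 8601, UTC)

    def coarsen(r, decimals=2):
        # Reduce location precision (to roughly 1 km) before sharing, a simple
        # stand-in for the privacy controls discussed in this section.
        return SensorReading(r.sensor, r.value, round(r.lat, decimals),
                             round(r.lon, decimals), r.timestamp)

    reading = SensorReading("noise_dB", 68.4, 50.84673, 4.35247,
                            datetime.now(timezone.utc).isoformat())
    print(asdict(coarsen(reading)))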
  
	
  
Why it matters in governance
  
Participatory sensing radically improves data availability for evaluating the effect of public policies and how individual behaviour is changing, provided adequate privacy provisions are in place. Devices should assure enhanced user control over data, i.e. which data is being sent, when, and how it is treated, as well as the possibility of enhanced data anonymisation.
Furthermore, the design of participatory sensing should be placed in the framework of policy contexts, allowing inference of policy impact from the data. Future platforms should combine participatory sensing, mass moderation, personalised feedback and social network analysis to assess the interplay between perception, data and social interaction.
Participatory sensing is already used in "public sphere" activities such as environment and health. However, the specific issue of evaluating public policies has so far been little researched, particularly with regard to the implications for privacy, large-scale deployment and bias management in citizen sensing.
  
	
  
Recent trends
  
Small-scale experiments are being carried out in different domains, mainly dealing with environmental and health data. Applications in the field of urban planning are particularly promising, yet there is no structural link between participatory sensing and policy models. Larger-scale deployment would require more granular privacy compliance and user control, adequate incentives for participation and the business models deriving from them. There is no formalisation of the requirements and the design of opportunistic versus participatory sensing, including sampling design for participant recruitment.
  
	
  
Advantages of Application in Public Policy
  	
  
Participatory sensing can be used to gather and collect the following kinds of information:
• Civic data: neighborhood maintenance issues, power outage documentation
• Environmental data: data providing hints on pollution levels, climate-change-related data
• Transportation: commuting habits, location and movement data, condition of the roads, connections to public transportation, incidence of traffic, occurrence of accidents
• Health: vital signs, information providing early warnings of diminishing health, information on epidemic spread, self-administered diagnostic tests

The advantages for policy making are:
• The possibility to collect data at an otherwise unachievable scale and geographic range
• Virtually costless data collection
• Revealing and highlighting behavioural patterns and routines, which can then be changed accordingly
• Engaging ordinary citizens in sensitive issues
• More pervasive monitoring capacity in fields such as environment and health
  
Inspiring cases of policy-making-related applications [252]
	
  
• PEIR [253] (Personal Environmental Impact Report), a system allowing users to determine their exposure to environmental pollution by using a sensor in their mobile phone able to determine their location and means of transport
• eHealthSense [254], which automatically detects health-related events that are not directly observed by current sensor technology, like pain, tow conditions, depression
• SenSay [255], which is able to alert medical staff when the user falls or in case of suspect behaviour
• MobAsthma [256], which monitors the exposure to pollution affecting asthma. Both the volume of air inhaled and the pollution rate are collected by sensors interfaced to the mobile phone
• Haze Watch [257], in which the concentrations of carbon monoxide, ozone, sulphur dioxide and nitrogen dioxide are measured by embedding pollution sensors in mobile phones
• NoiseTube [258], which registers noise levels used to monitor noise pollution, which can affect human hearing and behaviour
• EpiSurveyor [259], used by the Red Cross to evaluate anti-malarial bednet distribution and use throughout sub-Saharan Africa, as well as the coverage achieved by vaccination campaigns
• CarTel [260], a mobile sensing and computing system making use of mobile phones carried in vehicles to collect information about traffic or WiFi access points
• NoiseSpy [261], a sound-sensing system able to log data for monitoring environmental noise. Users can explore a city area while visualizing noise levels in real time
  
	
  
Key challenges and gaps
  
• Preserve the privacy of the users, who are required to provide extremely personal data
• Create new mobile device interfaces which are engaging, efficient and usable
• Ensure security, as the citizen's current and past positions might be spotted
• Provide new sensors capable of increasing the range of information that individuals can track and use
• Create network infrastructures aimed at supporting participatory sensing services
• Provide incentives for participation in data collection
• Develop analytical techniques to carry out more accurate inference with mobile-phone-supplied data such as geo-data and images
• Develop visual analytics and data analysis techniques which provide relevant and easy-to-interpret information for the general public
• Create engaging and efficient mobile device interfaces to support effective, real-time user interaction
• Provide quality data, temporal and geographical availability, and the ability to cover the phenomena
[252] To the best of our knowledge there are not yet government applications in the realm of participatory sensing.
[253] http://peir.cens.ucla.edu
[254] http://dl.acm.org/citation.cfm?doid=1411759.1411761
[255] See Siewiorek et al. (2003)
[256] http://crystal.uta.edu/~kumar/CSE4340_5349MSE/mobsense.pdf
[257] http://www.pollution.ee.unsw.edu.au/
[258] http://noisetube.net/
[259] http://www.episurveyor.org/user/index
[260] http://cartel.csail.mit.edu/doku.php
[261] www.cl.cam.ac.uk/mobilesensing/downloads.htm
Current research
• Aggregating and validating citizen-generated and government data; resource discovery
• Selective sharing and context verification mechanisms, as well as application-level support for data-gathering campaigns
• Incentives for participatory sensing
• Evaluation of human agents as sensors
  
Disciplines of research: sensor networks, location services; psychology, economics of participation; privacy
Research instruments: testbeds and living labs, STREPs
Future research: long-term and short-term issues
  
Short-term research
• Sensing coverage, sensor calibration and sensor context for opportunistic sensing
• Quality verification for participatory sensing
• Privacy-compliant sensing and sharing
• Business models for participatory sensing
• Intelligently recruiting collaborators and deploying data collection protocols
• Anonymous, transparent use of human-carried sensing devices
• Evaluating behavioural change through participatory sensing
  
Long-term research
• Enhanced analytical techniques to make more accurate inferences from mobile-phone-supplied data such as location and images, and to automatically detect and respond to subtle events
• New personal-scale sensors to expand the range of information that individuals can track and use
• Privacy by design in participatory sensing
3.2.8. Identity Management

Introduction and definition
  
Digital identity management has long been a policy priority in the EU Member States, and large-scale investments have been deployed. In the context of collaborative governance, digital identity constitutes a fundamental pillar of trustworthy cooperation. Identity management systems include the control and management of credentials used to authenticate one entity to another, and to authorise an entity to adopt a specific role or perform a specific task. Global in nature, they should support non-repudiation mechanisms and policies; dynamic management of identities, roles and permissions; privacy protection mechanisms; and revocation of permissions, roles and identity credentials. Furthermore, all identities and the associated assertions and credentials must be machine processable and human understandable.
At the EU level, the goal is to provide an interoperable, privacy-protecting infrastructure for eID that is federated across countries, with multiple levels of security for different services, relying on authentic sources, and usable in a private sector context.
Alongside this, a flexible, context-dependent and interoperable identity management system is required for large-scale deployment. In particular, federated identity management systems are needed that ensure flexible deployment and seamless integration of users' preferred identities, including commercial (such as Facebook Connect) and open source solutions (such as OpenID). Particular focus should be put on usable delegation of privileges, which is very important for workflows and for integrating services.
Electronic identity management should identify non-humans (devices, sensors) as well as humans, in order to ensure validated identity in the context of participatory sensing and the Internet of Things.
At the same time, eIdentity management should take into account the risks of information centralization in terms of data privacy and security. Cost-benefit considerations of centralised versus federated systems remain a key issue. Identity federation can be accomplished in any number of ways, some of which involve the use of Internet standards, such as the OASIS Security Assertion Markup Language (SAML) specifications, with the use of open source technologies and/or other openly published specifications. A minimal sketch of such federated token validation follows.
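As a minimal illustration of the federation mechanics discussed above, the Python sketch below validates an identity token issued by a hypothetical identity provider, in the JWT format used by OpenID Connect, using the open-source PyJWT library. Issuer, audience and key are invented placeholders; a real deployment would verify asymmetric signatures with keys fetched from the provider's published metadata.

    import jwt  # PyJWT, an open-source JWT implementation

    # Hypothetical federation parameters; a real relying party would obtain
    # these from the identity provider's published metadata.
    ISSUER = "https://eid.example-member-state.eu"
    AUDIENCE = "https://service.example-agency.eu"
    SHARED_KEY = "demo-secret"  # placeholder; production uses public-key crypto

    def validate_identity_token(token):
        """Check signature, issuer and audience; return the identity claims."""
        return jwt.decode(token, SHARED_KEY, algorithms=["HS256"],
                          issuer=ISSUER, audience=AUDIENCE)

    # Round-trip demo: the "identity provider" signs, the service validates.
    token = jwt.encode({"sub": "citizen-12345", "iss": ISSUER, "aud": AUDIENCE},
                       SHARED_KEY, algorithm="HS256")
    print(validate_identity_token(token))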
  	
  
Why it matters in governance
  
Identity certification is one of the core tasks of government, and therefore pertains specifically to the governance context. This is reinforced by Meta Group (2002), which views the implementation of identity management "not as a differentiator but as a mandatory security consideration, a business imperative and a non-negotiable user expectation".
  
	
  
Recent trends
  
The role of identity management is vital in the context of ICT for Governance and Policy Modelling. The importance of addressing eIdentity-related issues for secure public service provision, citizen record management and law enforcement has made identity management a strategic issue for governments at both local and international level. Research on the design and implementation of privacy-preserving digital identity, as well as on its supporting management infrastructures and the delegation of authority, has reached a satisfactory level. Nevertheless, one of the greatest problems in identity management is the lack of interoperability of digital identities and identity management systems between proprietary systems and standards-based ones, and between organisations and governments.
Current practice
• Electronic ID creation at national level
• Pilots in cross-border eID interoperability in the EU (STORK project)
  
Public Policy Applications
The development of Federated Identity Management would lead to the following benefits at governmental level:
  
• Avoiding replicated effort: a reduction in the number of sign-ons and passwords needed for accessing multiple systems and databases, thereby decreasing cost and wasted time
• It would be possible to define a standards-based mechanism for sharing and managing identity information as it moves between discrete legal, policy and organizational domains
• Institutions would not have to establish separate relationships and procedures with one another
• It would be possible to grant and revoke user access to information more easily
• A reduction in the number of passwords accumulated: citizens either forget them or choose simple ones, thereby increasing insecurity and the possibility of fraud
• An increase in security regarding user access to information and digital resources, as it eliminates the need to replicate databases of user credentials for separate applications and systems, which are potential weak points
• An increase in the sensitive information that can be shared across government and organizational boundaries in case of crisis
• A focus on the users of information and services rather than on the entities that house those resources
  
Key challenges and gaps
  
• Fragmentation of research in identity along disciplinary lines
• The need for new identity-proofing processes
• Privacy issues: use-limitation principles, avoiding pervasive surveillance
• The capability to efficiently integrate services throughout the chain
• Time-saving identification
• The specifications and nature of a digital identity being dictated by the social and political environment of the country of issuance
• The increasing number of electronic identity-related crimes (identity fraud, identity theft, impersonation), which makes it difficult to guarantee the legitimacy of identities
  	
  
Current research
• Culture-dependent identity systems
• Mobile and biometrics in eIdentity
• Privacy-protecting identity management systems
• User-centric identity, delegation of authority
Disciplines of research: legal, technological, social, economics
Possible research instruments: testbeds and living labs, STREPs
Future research: long-term and short-term issues
Short-term research
• Quantitative research on cost-benefit analysis of interoperable identity
• Dynamic user-controlled identity disclosure
• Formal verification of identity management systems
• Governance and legal issues, levels of assurance

Long-term research
• Context-dependent identity management
3.2.9. Global Systems Science

Introduction and definition [262]
	
  
Current tools available to policy makers are insufficient for providing guidance on a global scale in facing present societal challenges, because of the connections across subject domains as well as the globalization of the policy challenges, which range from environmental threats to food security and energy sufficiency. Such challenges are multi-dimensional and borderless, and thereby cannot be solved by one single country or by one aspect of policy. In fact, current public policy making is targeted at individual rather than interrelated systems, and thereby struggles to achieve systemic change and to address challenges which are global and interconnected in scope, as they arise from the interplay of social, technological and natural systems. In this respect it is important to integrate scientific evidence into the social process in order to be able to address those challenges.
In this view we need a new multidisciplinary systems approach taking into account the connections across policy areas (e.g. economy, transport, health and the social understanding of systemic risk) as well as across geographical borders. This new branch of science should take into account the multidimensionality of global problems, given by the interconnectedness of decisions across different policy realms.
  
As stated on the CORDIS website [263], Global Systems Science "addresses new ways of supporting policy decision making on globally interconnected challenges such as climate change, financial crises, or containment of pandemics. The ICT engines behind GSS are large-scale computing platforms to simulate highly interconnected systems, data analytics for 'Big Data' to make full use of the abundance of high-dimensional and often uncertain data on social, economic, financial, and ecological systems available today, and novel participatory tools and processes for gathering and linking scientific evidence into the policy process and into societal dialogue. GSS will develop further the scientific and technological foundations in systems science, computer science, and mathematics."
Some examples of global systems are the following:
  
• Energy,	
  water	
  and	
  food	
  supply	
  systems	
  
• Community	
  of	
  scientists	
   	
   	
  
• World	
  wide	
  web	
  
• Globally	
  spreading	
  diseases	
  	
  
• Global	
  financial	
  system	
  
• Climate	
  policy	
  
• Web	
  of	
  military	
  forces	
  and	
  diplomatic	
  relations	
  
	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  
[262] Among the sources of this research challenge is the position paper "GSS: Towards a Research Program for Global Systems Science", prepared for the Second Open Global Systems Science Conference, which took place on June 10-12, 2013 in Brussels, as well as other documents related to the GSDP consortium.
[263] http://cordis.europa.eu/fp7/ict/fet-proactive/fetconsult2012-topic09_en.html
ICT tools are very important for GSS, as they support large-scale, complex societal and infrastructure decisions that affect human life and health. On the side of policy informatics, they provide a scientific evidence base for policy, i.e. models and data integrated across different policy sectors. On the side of societal informatics, ICT tools, presenting the models' results by means of games and visualization, can make a better link between stakeholders in the scientific and policy process, leading to a society-centred science. In this sense ICT tools are also very useful to solve problems for policy makers, to engage domain experts and empowered officials early, and to demonstrate the relevance of the issues treated.
  
	
   	
   	
  
Why it matters in Governance
  
There are two main motivations for adopting a Global Systems Science approach:
• Develop new models in support of decision-making: nowadays the most important issue to deal with is obviously the economic and financial crisis. A typical example of a global systems model is given by the interdependencies among the actors and institutions in the financial system, as well as the contagion channels between the financial system and the other sectors of the economy. Having neglected or overlooked such interconnections has led to dangerous chain reactions and contagion of crises not anticipated in current economic models. In this respect global systems models can give an alternative to existing theoretical and empirical approaches, in particular when facing challenges which are global by definition, such as financial instability and environmental or energy policy. The point of arrival will be the definition of advanced simulation models mimicking factual conditions and human behaviours, and embedding empirical data on systemic dependencies as well as on the role of human behaviour. Such models will be founded on large-scale agent-based modelling, and will allow stakeholder participation and interaction and online monitoring with feedback from individual citizens.
• Develop new models of governance: there is a necessity for scientific modellers to better communicate and interact with citizens, businessmen, politicians, civil servants, NGO representatives and other stakeholders, as concrete societal needs and policy decisions must drive the scientific questions to be asked, the data to be collected and how the models are to be conceived. Global Systems Science, by producing better models, can provide the decision-making process with insight on system behaviour and dynamical outcomes, leading to better policies. In this respect the current global governance model, which is based on nation states cooperating in international organizations, is unfit to meet global challenges, so that a Global Systems Science is necessary.
  
	
  
Tools and Techniques in GSS
  
GSS	
  includes	
  in	
  general	
  computer	
  science	
  and	
  mathematical	
  approaches	
  such	
  as	
  interaction	
  based	
  
computing;	
   data	
   topology	
   and	
   modeling	
   languages;	
   high	
   performance	
   computation;	
   data	
   mining	
  
methodologies;	
   methods	
   for	
   specification	
   and	
   analysis	
   of	
   dynamics	
   of	
   highly	
   interconnected	
  
systems;	
  specification,	
  verification	
  and	
  validation	
  of	
  the	
  computational	
  dynamics	
  simulations,	
  and	
  
formal	
   approach	
   to	
   the	
   analysis	
   of	
   dynamical	
   network	
   abstractions	
   for	
   complex	
   system	
  
representation.	
  More	
  in	
  particular	
  we	
  have:	
  
• Agent-based or Multi-agent Models, i.e. synthetic virtual realities populated by artificial agents that interact adaptively with each other and also change with the overall conditions of the environment
• Analyses of networks based on maps of relationships or linkages among the constituents of systems, from which it is possible to identify configurations that are especially unstable and that can be used as predictors of catastrophic failures in real-life networks
• Data Mining: techniques for finding patterns and relationships in large data sets with complex qualities, applicable to nonlinear and discontinuous phenomena
• Modelling of artificially constructed scenarios representing hypothetical models of complex systems that reflect their key constituents and dynamics, and which can be used to anticipate the effects of various conditions and to identify policies that are robust to many likely futures, for instance in the case of man-made or natural disasters
• Sensitivity Analysis, used for assessing, by means of numerical techniques, the behaviour and evolution of complex systems subject to shocks in the underlying parameters
• Dynamical Systems Models, i.e. sets of differential equations or iterative discrete equations used to describe the behaviour of interacting parts in a complex system, and used to simulate the results of alternative system interventions as well as the unintended consequences of policies (a minimal sketch of such a model follows this list)
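To make the last technique concrete, here is a minimal sketch of a dynamical systems model: a standard SIR epidemic written as differential equations, comparing a baseline run with a hypothetical intervention that lowers the contact rate. It is our own illustration, assuming Python with numpy and scipy available; all parameter values are invented, not drawn from the roadmap.

```python
# Minimal dynamical-systems sketch: an SIR epidemic as differential equations,
# comparing a baseline with a policy that lowers the contact rate (beta).
# All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

y0 = [0.99, 0.01, 0.0]                 # initial susceptible/infected/recovered shares
t_span, t_eval = (0, 160), np.linspace(0, 160, 161)

for label, beta in [("baseline", 0.30), ("intervention", 0.18)]:
    sol = solve_ivp(sir, t_span, y0, args=(beta, 0.1), t_eval=t_eval)
    print(f"{label}: peak infected = {sol.y[1].max():.2%}")
```

Running both scenarios side by side is exactly the use named above: simulating the results of an alternative system intervention before deploying it.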
  	
  
	
  
	
  
	
  
	
  
	
  
Policy Applications of GSS Tools
Health. In contrast with traditional epidemic models, in which each agent bears the same probability of infection, agent-based models entail a heterogeneous population interacting in a changing environment, leading to more realistic tests and predictions of new policies. As an example, some complexity-science simulations showed that reductions in air traffic (even by 20-50%) would not dramatically slow the spread of certain epidemics. On the other hand, the massive stockpiling of smallpox vaccine would reduce the number of infections in the case of a biological terror attack.
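A minimal agent-based sketch of the heterogeneity argument above (our own illustration, assuming Python with networkx; the contact structure and susceptibilities are invented): each agent sits on a contact network and carries its own infection probability, unlike the uniform probability of traditional models.

```python
# Heterogeneous agents on a random contact network; each agent has its own
# per-contact infection probability. All numbers are made up for illustration.
import random
import networkx as nx

random.seed(1)
g = nx.erdos_renyi_graph(n=500, p=0.02)
prob = {n: random.uniform(0.02, 0.20) for n in g}   # per-agent susceptibility
state = {n: "S" for n in g}
state[0] = "I"                                       # seed one infection

for step in range(30):
    newly = [v for u in g if state[u] == "I"
               for v in g[u] if state[v] == "S" and random.random() < prob[v]]
    for v in newly:
        state[v] = "I"

print("infected after 30 steps:", sum(s == "I" for s in state.values()))
```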
  
	
  
Urbanization. More complex patterns displace the classic centre-periphery structures and put into question the distinction between nature and culture. Moreover, urban lifestyles are blended with the global awareness fostered by ICT. In this view urbanization raises major challenges, because innovations may worsen already worrying trends: urbanization can undermine communities, leading to new forms of violence and anomie, and health problems such as circulatory diseases, cancer, obesity and epidemics can be exacerbated. GSS is able to explain how settlement structures and lifestyles are modified by the interaction between the global urban system and the global ICT system, as well as how policy-makers can influence their future dynamics.
  
	
  
Traffic. Analytic techniques can be used to anticipate life-threatening traffic phenomena, as well as to reduce pollution and improve traffic flows in order to save time and energy. This advanced modelling approach incorporates human cognition and has been adopted for predicting unexpected events such as traffic jams, so as to automatically alert drivers via wireless communication devices. This class of models can be generalised to other types of situations, e.g. outbreaks of civil unrest. Similar methods have been successfully used for analysing human foot traffic and have been applied to prevent stampedes during the Hajj in Mecca.
  
	
  
Crisis management and security. By using global systems science and complex systems approaches it is possible to enable bottom-up disaster response capabilities, as well as to implement a proactive approach to disaster preparation and planning by means of policy simulations. Moreover, network-analysis methods can be used to attempt to identify associations of terrorists, including pinpointing the locations of key dangerous individuals.
  	
  
	
  
Climate Change. Even the most advanced climate change models miss the social and human aspect of the issue, given by the strict interconnection between nature, the economy, finance, energy and the industrial structure. In this respect, complexity and global systems science techniques make it possible to identify tipping points in the human-earth system. An example is given by the management of water resources: water stresses occur regularly in different geographical locations, showing that a tipping point, leading to massive long-term water shortages, may be close. GSS will support global climate policy by highlighting its benefits, from reduced health impacts to productivity growth accelerated by new directions and volumes of investment, so as not to remain stuck in unfavourable basins of attraction. This will require new models as well as greater interaction between different policy fields such as environment, energy, employment, health and foreign policy. Policy makers will join forces to show that increased economic well-being is possible with systematically decreasing emissions, to strengthen resilience while reducing emissions, and to prepare for the need to take CO2 back from the atmosphere, especially once global poverty has been overcome.
  
	
  
	
  
Financial Markets. Decision support and analysis tools based on modelling and simulation, conceived within the scope of global systems science, can allow the theoretical testing of the resilience of proposed financial regulations, in order to avoid the dramatic instabilities of recent times. These classes of models can offer a crucial supplement to traditional analysis, as they emphasize dynamism rather than equilibrium, real attractors rather than theoretically prescribed ones, positive feedback loops and phase transitions. The recent financial crisis did not completely crash the Euro economy thanks to the president of the ECB, Mario Draghi, who pushed the market from a bad to a good equilibrium rather than treating the crisis as a shock to be absorbed by the capacity of the markets to return to a stable equilibrium. In the coming decades GSS will be necessary for the development of an integrated governance of global risks, taking into account the interactions between financial and other markets as well as socioeconomic dynamics at different scales, and relying on the analysis of large data sets to monitor the complex networks of world agents. Researchers and policy makers should join forces within the scope of GSS in order to design and implement effective measures towards a financial sector that supports increasing employment and sustainable economic growth, such as rules to limit the risky dynamics of complex financial systems, regional experiments with innovative schemes to foster sustainable growth, and the creation of a global monetary system.
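As an illustration of the kind of contagion model hinted at here, a minimal sketch of interbank contagion on a network (our own toy example, assuming Python with networkx; exposures and capital buffers are invented numbers): a bank fails once its losses from failed counterparties exceed its capital, and defaults cascade through the network.

```python
# Minimal interbank-contagion sketch: defaults propagate along exposure links
# until no further bank breaches its capital buffer. Numbers are made up.
import networkx as nx

g = nx.gnp_random_graph(50, 0.1, seed=42, directed=True)
for u, v in g.edges:
    g.edges[u, v]["exposure"] = 1.0            # lending from bank u to bank v
capital = {n: 2.5 for n in g}                  # capital buffer per bank

failed = {0}                                   # seed an initial default
changed = True
while changed:
    changed = False
    for bank in g:
        if bank in failed:
            continue
        losses = sum(g.edges[bank, v]["exposure"]
                     for v in g.successors(bank) if v in failed)
        if losses > capital[bank]:             # buffer exhausted -> default
            failed.add(bank)
            changed = True

print(f"banks failed after cascade: {len(failed)} of {g.number_of_nodes()}")
```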
  
	
  
	
  
Methodological Aspects

First of all, GSS will rely on computer models in order to tackle the complex multi-scale (spatial and temporal) structure of global systems. However, computer models are ill suited to dealing with the ambiguities that are a vital ingredient of human life. In this respect, narratives delimit the scope within which a particular model is useful and help us understand what goes wrong when it is used beyond that scope.

Another interesting methodological realm of GSS is high-performance computing (HPC), which should provide policy-makers with scientific evidence that comes with an assessment of its reliability, validity and relevance, obtained by exploring complex sample spaces of parameter values and boundary conditions. In this respect, computational skills must be combined with strong skills in communication and in assessing the relevance of evidence for addressing specific practical issues.

Moreover, Big Data, if used in new ways, can become an essential tool for perceiving global systems. GSS can define practical problems and preliminary concepts that can be used to mine big data sets, which can be obtained by crowdsourcing, with a view to describing the dynamics and structure of global systems by exploiting the relation between models and narratives of globalization. The resulting output will be used to improve problem definitions and concepts, as well as to monitor the intended and unintended consequences of policies dealing with global systems.
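A minimal sketch of the parameter-space exploration mentioned above (our own illustration; the one-line "model" is a hypothetical stand-in for a full simulation run): sweep a grid of parameter values across processes and report the outcome for each combination, which is the basic pattern behind assessing a result's sensitivity to parameter choices.

```python
# Sweep a model's parameter grid in parallel and report each outcome.
# The model here is a stand-in; in practice it would be a full simulation.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def model(beta, gamma):
    # hypothetical outcome measure, e.g. an epidemic's reproduction number
    return beta / gamma

betas  = [0.1, 0.2, 0.3, 0.4]
gammas = [0.05, 0.10, 0.20]

if __name__ == "__main__":
    grid = list(product(betas, gammas))
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(model, *zip(*grid)))
    for (b, g), r0 in zip(grid, results):
        print(f"beta={b:.2f} gamma={g:.2f} -> outcome={r0:.1f}")
```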
  
	
  
	
  
	
  
Key challenges and gaps

GSS entails a number of challenges and gaps:
• Model validation in terms of the underlying assumptions and parameter choices
• Creation of ICT platforms for setting up, executing and validating large-scale models. In particular, it would be necessary to establish agreed-upon standards for validation and calibration in order to improve the models' credibility for policy makers
• Tools for gathering, integrating and linking data from various sources: financial data, socio-economic data, data on financial and economic networks, ecological and energy data, and even data on the nature of human decisions. Tools for knowledge elicitation
• Decision-support tools: scientists and researchers should try to formulate the results of their work in terms that policymakers can use, for example through simulations showing different scenarios in a global setting
• Interoperability of models: models should be built so that the results of modelling are comparable, eliminating the heterogeneity that prevents non-experts from choosing and applying models, as well as from gauging their relevance and credibility. Moreover, it should be possible to run simulations at different degrees of granularity
• Institutional adaptation: the development of a science which is global in scope requires the coordination of research and education efforts at a global level
  	
  
A more comprehensive set of challenges and opportunities can be found at http://goo.gl/qWbhG[264]. Here we summarize the main points:
  
• Breaking inter-disciplinary boundaries, stimulating cross-disciplinary fertilization and removing silos between organisations, governments, scientists/technologists and other stakeholders
• Building the scientific foundations of GSS, scoping the technologies and methods, changing basic paradigms in science and making society and humans part of the phenomenology of science
• Managing and making sense of Big Data: managing the increased input of data/information and analysing big data while preserving ethical values and respecting privacy and civil rights
• Models, simulation and languages: creating a common universal language to precisely describe and simulate models, closing the gap between formalized models and the real-world peculiarities of systems
• Infrastructures and resources: developing a culture of ICT as the fabric that connects nature and society, and pushing for a new conceptual advance in order to overcome the limits of computing
• Ownership and regulations: enabling scientists to act on society and to access data without violating individual rights to privacy, and managing copyright/IPR in the hyper-connected world
• GSS and policy making: ensuring the take-up of GSS by decision makers and policy makers, enabling policy feedback to be reflected in the models, and creating tools for effectively supporting decision making
• Communicating GSS and engaging society: raising awareness of Complex Systems science and promoting GSS skills/curricula

[264] This set of challenges and opportunities was developed by the experts convened at the brainstorming meeting on "Global System Sciences: the role of models and data", which took place in Brussels on 7-8 February 2013: http://www.isi.it/events/workshop-on-global-systems-science-role-of-models-and-data-brussels-february-7-8-2013
  
As for the use of ICT tools in GSS, there are several categories of challenges, organised here by stage[265]:

• Evidence: scientific data; scientific code; complex modelling and simulation; software architecture, sustainability, interoperability, interfaces; common data interchange format; domain-specific languages
• Dissemination: visualization; user interface; independent validation; ICT tools for learning and understanding
• Global Reasoning: accessibility; tools for interactive participation; computer linguistics for narrative and automatic translation; user-friendly simulation tools for prediction; computer security for trust management
• Proposal: ICT platforms; ICT tools for transparency and participation; semantic frameworks for computer-aided decision making
• Governance: use of ICT platforms; crowd-sourced development; feedback to the scientific data management and the model
• Implementation: validation

Table 2 Challenges in the use of ICT tools for Global Systems Science

[265] Source: readapted from the presentation delivered by Ulf Dahlsten at the First Open Global Systems Science Conference (Brussels, November 8-10, 2012). A Tech4i2 delegate attended the meeting. http://blog.global-systems-science.eu/?page_id=938
Current Research

• Computer Science for interacting Informational, Technological and Social Networks
• Computer Science of very large systems: high-performance computing and data-driven societal science
• Advanced computing for Network Science
• Network Science as an integrating framework for real-world complexity
• Network approaches for governance and policy tools
• Computational modelling in complex realities
• Computational and digital epidemiology
• Mathematics of complex systems
  
	
  
Future Research

There is currently no mathematical model able to describe all the interactions between the components in Global Systems Science. So far scientists have implemented components and their interactions in code, but there is a need to create an intermediate, mathematical layer between narratives and simulations, as in physics. This mathematical layer cannot be based on mere partial differential equations or functional analysis. On the contrary, as the formal language of GSS is computer code, the mathematical layer has to be embedded in the mathematics of general programs. In practice, computer science should play for GSS the same role that mathematics plays for physics. In this sense there is a need to adapt and extend to GSS models one or more of the formal languages for specifying and reasoning about programs. The main candidate so far is constructive type theory, which can be used to express both programs and classical mathematical results.
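To make this concrete, here is a minimal sketch in Lean, a proof assistant based on constructive type theory (the definitions are our own illustration, not taken from the roadmap): the same language expresses a program and a mathematical theorem about it.

```lean
-- A program and a proof about it, written in the same language.
def double (n : Nat) : Nat := n + n        -- the program

-- A mathematical statement about the program: double n is always even,
-- proved constructively by exhibiting the witness k = n.
theorem double_even (n : Nat) : ∃ k, double n = k + k :=
  ⟨n, rfl⟩
```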
4. The case for policy-making 2.0: evaluating the impact
  	
  
These technologies and methodologies certainly show great promise for better policy-making, but to what extent do they genuinely lead to it?
In this section, we provide an overview of the evidence regarding the actual impact of policy-making 2.0 tools. To do so, we extract the main related findings from the case studies, from the prize on policy-making 2.0 launched by the project, and from the survey of users' needs. Based on these findings, we propose an additional research challenge on the impact evaluation of policy-making 2.0.
  
4.1. Cross analysis of case studies
  	
  
In deliverable D5.2, the CROSSOVER project provides an in-depth analysis of 4 case studies:
1. Gleam
2. Pathways 2050
3. UrbanSim
4. Opinion Space

In this section, we extract and analyse the relevant information about their impact on the quality of policy-making.
  
4.1.1. Global Epidemic and Mobility Model
  
GLEAM, the Global Epidemic and Mobility model[266], is a discrete stochastic epidemic computational model based on a meta-population approach in which the world is divided into geographical census areas connected in a network of interactions by human travel fluxes corresponding to transportation infrastructures and mobility patterns. The GLEAM 2.0 simulation engine includes a multi-scale mobility model[267] integrating different layers of transportation networks, from long-range airline connections to short-range daily commuting patterns[268], and it elaborates stochastic infectious disease models to support a wide range of epidemiological studies, covering different types of infections and intervention scenarios, in order to respond to the spread of a pandemic crisis at very short notice. Real-world data on population and mobility networks are used and integrated into structured spatial epidemic models to generate data-driven simulations of the worldwide spread of infectious diseases.
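A minimal meta-population sketch of the idea (our own illustration in Python/numpy, not GLEAM's code; populations, mobility rates and epidemic parameters are invented): each area runs a local SIR process while a mobility matrix couples the areas, so travellers carry infection between them.

```python
# Meta-population SIR: local epidemics coupled by a mobility matrix.
# All numbers are made up for illustration.
import numpy as np

n_cities = 4
pop = np.array([5e6, 2e6, 1e6, 5e5])
# daily travel fractions between cities (rows: origin, cols: destination)
mob = np.full((n_cities, n_cities), 0.001)
np.fill_diagonal(mob, 0.0)

S, I, R = pop.copy(), np.zeros(n_cities), np.zeros(n_cities)
S[0] -= 100
I[0] = 100                                    # seed an outbreak in city 0
beta, gamma = 0.3, 0.1

for day in range(120):
    new_inf = beta * S * I / pop              # local transmission
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    I = I + mob.T @ I - mob.sum(axis=1) * I   # travellers carry infection

print("final share ever infected:", np.round(R.sum() / pop.sum(), 3))
```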
  
GLEAM is mostly used in the design stage of the policy-making cycle, and it is able to manage and visualize a huge amount of complex and diverse data (e.g. a detailed airline transportation model). In fact, it draws on data from census agencies, population data at very high resolution, a world map implemented by NASA with the world population divided into 5x5-mile boxes, the entire airline database, and about 40 databases from different countries covering local mobility, transfers, etc. In addition, it has to be mentioned that GLEAM moved beyond research in the H1N1 epidemic case, when the simulation derived from the application of GLEAM was used ex-post and resulted in a particularly accurate analysis. GLEAM is nowadays utilized both in research initiatives (e.g. the EPIWORK IP project[269] and the EPIFOR project[270]) and by formal policy-making agencies (e.g. the US Defense Agency). Moreover, GLEAM is also used in educational courses, both at high school and at university level.
  
	
  
	
  
Figure 4: The three population and mobility data layers in GLEAM

Impact of GLEAM
  
The main impact of GLEAM so far has been the production of a real-time forecast for the H1N1 pandemic, a quite successful exercise that showed the power of the model. A validation paper (Tizzoni et al. 2012), published in December 2012, showed that the GLEAM predictions were largely spot on.

[266] http://www.gleamviz.org/
[267] http://www.gleamviz.org/model/
[268] GLEAM in Detail. Available at: www.GLEAMviz.org/GLEAM-in-detail/
[269] EpiWork - Developing the framework for an epidemic forecast infrastructure. Available at: http://www.epiwork.eu
[270] EpiFor - Complexity and predictability of epidemics: toward a computational infrastructure for epidemic forecasts. Available at: http://www.epifor.eu
  
Many stakeholders are also using the software to support their policy-making procedures in terms of designing measures to prevent or constrain the spread of diseases. Examples include the US Defense Agency, the JRC, and other organisations that are using the software. It has to be noted that the JRC is using the tool in its long-term strategy for studying and responding to the spread of epidemics (communicating the simulation results to DG SANCO policy officers), based on the experience accumulated from using the GLEAM toolkit during the H1N1 outbreak.
The core innovation of GLEAM lies in its computational model, which can integrate data from various sources and provide a close-to-real-time forecast of the spread of epidemics at a global level (by combining various real-time data sources), something that was not possible before at that level of precision and timeliness. Moreover, through the visual interface users are in a position to create their own models and investigate the specific diseases and issues they are interested in.
  
	
  
4.1.2. UrbanSim

UrbanSim[271] is a software-based demographic and development modelling tool for the integrated planning and analysis of urban development, incorporating the interactions between land use, transportation, environment, economy and public policy with demographic information. It simulates in a 3D environment the choices of individual households, businesses, and parcel landowners and developers, interacting in urban real estate markets and connected by a multi-modal transportation system. The 3D output resulting from the process underpinning the simulation model is presented using indicators, i.e. variables that convey information on significant aspects of the simulation results.

Figure 5 UrbanSim Land Maps
  
	
  
This approach works with individual agents, as in agent-based modelling, and with very small cells, as in the cellular automata[272] approach, or even at building and parcel level. UrbanSim differs from these approaches, however, by drawing together choice theory[273], a simulation of real estate markets, and statistical methods in order to achieve an accurate estimation of the necessary model parameters (such as land policies, infrastructure choices, etc.) and to calibrate the uncertainty in its system. As an example of its use, one could refer to the project on Modelling Land Use Change in Chittenden County[274], where model parameters based on statistical analysis of historical data are integrated with market behaviours, land policies and infrastructure choices in order to produce simulations of household, employment and real estate development decisions (the first two based on an agent-based approach, the latter on a grid-based approach).

[271] http://www.urbasim.org
[272] http://en.wikipedia.org/wiki/Cellular_automaton
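A minimal sketch of the discrete-choice machinery behind such models (our own illustration in Python/numpy, not UrbanSim code; the utility coefficients are invented): a multinomial logit assigns each candidate location a choice probability derived from its attributes, and an agent's choice is sampled from those probabilities.

```python
# Multinomial logit over candidate locations, with made-up coefficients:
# households dislike rent and like accessibility.
import numpy as np

rng = np.random.default_rng(3)
n_locations = 5
rent   = rng.uniform(500, 2000, n_locations)    # monthly rent per location
access = rng.uniform(0, 1, n_locations)         # accessibility score

utility = -0.002 * rent + 2.0 * access          # hypothetical utility function
p = np.exp(utility) / np.exp(utility).sum()     # logit choice probabilities

choice = rng.choice(n_locations, p=p)
print("choice probabilities:", np.round(p, 3), "-> household picks", choice)
```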
  
	
  
Impact of UrbanSim

As far as impact is concerned, the European cases are not at the same level as the US ones. In the US there are quite a number of MPOs (Metropolitan Planning Organizations) that actively use the UrbanSim platform. The most indicative application, representing the approach common in the US, is probably the San Francisco Bay one. That case involved examining and analysing five alternative scenarios, which required articulating a set of assumptions about land use policies, transport policies and macro-economic growth (the analysis is now complete; relevant publications will be available in the next few months).
In one of the scenarios, analysing the visibility of the proposed policy through reverse engineering was attempted, which made the task much more challenging, both in terms of research and of implementation. The agency has now accepted the results, with documentation and visualization supporting them.
In the San Francisco case, the 3D visualization system was created in order to achieve higher visibility among citizens than the plain UrbanSim tool. The intention was to use this system in a number of workshops held during January 2012. User engagement was intense even from the development/testing phase. In addition, the public agencies used it in a series of meetings with community organizations, each with 15 to 200 participants. The point of these meetings was to communicate the different scenarios to the public and to receive feedback on the citizens' preferences.
One of the most innovative elements of UrbanSim is the combination of various technological and theoretical aspects, as well as the withdrawal of strong assumptions regarding urban planning in favour of weaker ones (e.g. that markets need not be in equilibrium). For example, the impacts of transport projects on urban planning are far from instantaneous (in fact they may evolve over decades). In addition, the capacity to support these weaker assumptions can itself be considered a core innovation.
  
	
  
4.1.3. Opinion Space

Launched by the U.S. Department of State[275] in collaboration with Berkeley University, which developed it, Opinion Space bridges the worlds of politics and social media in an interactive visualization forum where users can engage in open dialogue on foreign affairs and global policies. It invites users to share their perspectives and ideas in an innovative visual "opinion map" that illustrates which ideas generate the most discussion and which are judged most insightful by the community of participants.

[273] http://en.wikipedia.org/wiki/Choice_theory
[274] http://www.uvm.edu/rsenr/countymodel/Workshop08bv3.ppt
[275] U.S. Department of State. Available at: http://State.gov
  
Using an experimental gaming model, Opinion Space incorporates techniques from deliberative polling, collaborative filtering, and multidimensional visualization. The result is a self-organizing system that uses an intuitive graphical "map" to display patterns, trends, and insights as they emerge, and that employs the wisdom of crowds to identify and highlight the most insightful ideas.
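A minimal sketch of the multidimensional-visualization step (our own illustration in Python/numpy; the ratings are random stand-ins): project each participant's vector of ratings onto a two-dimensional "opinion map" with principal component analysis, so that similar opinions land close together.

```python
# Project high-dimensional opinion vectors onto a 2-D map via PCA (SVD).
# The rating data here are random stand-ins for real participant responses.
import numpy as np

rng = np.random.default_rng(7)
ratings = rng.integers(1, 6, size=(200, 10)).astype(float)  # 200 users, 10 items

X = ratings - ratings.mean(axis=0)             # centre the data
_, _, Vt = np.linalg.svd(X, full_matrices=False)
coords = X @ Vt[:2].T                          # 2-D position of each user

print("first user's map position:", np.round(coords[0], 2))
```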
  
	
  
	
  
Figure 16 Rating other opinions in Opinion Space

Opinion Space is fully operational in its current state. Nevertheless, as a research platform it still remains experimental. The large amount of data collected is highly structured, and this supports continuing research on text analysis, statistical modelling, etc.
  
	
  
Impact of Opinion Space

One of the first and main indicators of the impact of Opinion Space, which applies mostly to the "agenda setting" and "monitoring and evaluation" phases of the policy cycle, was the participation rate: the share of users who arrive on the platform for the first time and become active participants. People who arrive at a website always outnumber those who actually participate (in some projects the rate was close to 50%, in others around 10%).
In the State Department instance (of Opinion Space 3.0), more than 2000 different ideas were collected (about US foreign policy). In addition, more than 5000 individual responses were collected. It cannot be said whether the final decisions were based on some of the ideas provided, but a detailed report was provided to the policy makers. The project with a US auto-maker (targeted towards recognizing ways of improving its image) resulted in about 1000 ideas and about 100.000 ratings evaluating these ideas (more specifically, e.g., they talked about green vehicles). One of the core innovations and successes of Opinion Space is the very fast way to browse (and rate) among a large number of ideas (even if this is a visualization-oriented innovation). From the scientific point of view, the greatest innovation was bringing statistical analysis to structured discussion data.
One of the best endorsements of Opinion Space was Hillary Clinton's reference to the initiative. Other endorsements include those of high-level officers of collaborating companies, as presented on the Opinion Space website. As far as the Opinion Space team is aware, Opinion Space has not yet been incorporated into any formal decision-making procedures. The State Department, however, uses Opinion Space "informally" in order to get ideas and opinions on specific policies.
  policies.	
  
	
  
4.1.4. 2050 Pathways Analysis

The UK Department of Energy and Climate Change (DECC) built the 2050 Pathways Analysis Calculator to help the public engage in the debate, and to allow Government to ensure that its short- and medium-term planning was consistent with achieving the long-term aim. More specifically, as the UK is committed to reducing its greenhouse gas emissions by at least 80% by 2050, relative to 1990 levels, a transformation of the UK economy is needed while ensuring secure, low-carbon energy supplies to 2050, and the country faces major choices about how to do this. In the Carbon Plan published in December 2011, the Calculator was used to illustrate three 2050 futures that show some of the plausible routes towards meeting the target.
  	
  
The 2050 Pathways Analysis features four resources:
1. A web-based tool for the public to try out their own ideas for reducing greenhouse gas emissions.
2. An in-depth Excel-based tool and reporting system which includes the methodology and the models used for the analysis.
3. A web-based presentation for younger audiences about greenhouse gas emissions.
4. A toolkit for leading an energy debate in schools.
  	
  
	
  
The 2050 Calculator is targeted at citizens, policy makers, senior officials and politicians, as well as technical experts, through different interfaces.
The 2050 Pathways Analysis presents a framework through which it is possible to consider some of the choices and trade-offs we will have to make over the next forty years. It is system-wide, covering all parts of the economy and all greenhouse gas emissions released in the UK. It is rooted in scientific and engineering realities, looking at what is thought to be physically and technically possible in each sector[276].
  
2050 Pathways is a tool to help policy makers, the energy industry and the public understand these choices. For each sector of the economy, four alternative trajectories have been developed, ranging from little or no effort to reduce emissions or save energy (level 1) to extremely ambitious changes that push towards the physical or technical limits of what can be achieved (level 4).
  	
  
The 2050 Pathways Calculator, available on the DECC website, allows users to develop their own combination of levels of change to achieve an 80% reduction in greenhouse gas emissions by 2050, while ensuring that energy supply meets demand[277].
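A minimal sketch of the calculator logic just described (our own illustration in Python, not DECC's model; the sector baselines and per-level reduction shares are invented): a pathway is one effort level per sector, and total 2050 emissions follow from the chosen levels.

```python
# Toy pathway calculator: pick an effort level (1-4) per sector and compute
# the resulting emissions cut. All numbers are invented for illustration.
REDUCTION = {1: 0.00, 2: 0.30, 3: 0.50, 4: 0.70}    # share of emissions cut
BASELINE = {"transport": 120, "buildings": 90,      # MtCO2e per sector
            "industry": 110, "power": 180}

def pathway_emissions(levels: dict) -> float:
    """Total 2050 emissions for a chosen effort level per sector."""
    return sum(BASELINE[s] * (1 - REDUCTION[levels[s]]) for s in BASELINE)

choice = {"transport": 3, "buildings": 2, "industry": 3, "power": 4}
total = pathway_emissions(choice)
base = sum(BASELINE.values())
print(f"pathway cuts emissions by {1 - total / base:.0%} vs baseline")
```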
  
The supportive tools of the initiative provide different ways of securing a low-carbon future for the UK, and they can be tried out:
• By creating each user's own pathway using the 2050 Web Tool.
• By exploring what a low-carbon UK might look like in 2050 by playing the simplified My2050 simulation.
• By taking the debate into the classroom with the schools toolkit.

[276] Department of Energy and Climate Change: https://www.gov.uk/2050-pathways-analysis
[277] HM Government (2010). 2050 Pathways Analysis. Available at: http://www.decc.gov.uk/assets/decc/what%20we%20do/a%20low%20carbon%20uk/2050/216-2050-pathways-analysis-report.pdf
  	
  
	
  
	
  
Figure 17 Playing the My2050 game for the demand side
  
	
  
As far as the CROSSOVER Policy Cycle is concerned, the project probably fits into the first step, that of Agenda Setting. This is due to the fact that the concept is a high-level one (e.g. reduce gas emissions by 80% by 2050). As the data are currently being updated and a comparison between the projected and the actual results will take place, in the near future the case could probably fit into the Monitor and Evaluation step of the policy cycle as well.
  
	
  
Impact of 2050 Pathways Analysis

The numbers of visitors and of interactions with the tool demonstrate the success and impact of the case. In the first three months after the official project launch there were about 10.000 unique visitors to the platform. As for My2050, over 16.000 pathways have been created to date. Regarding the stakeholders, about 200 were involved in the initial (building) phase, and after the launch about 500 stakeholders were contacted. Moreover, a week-long online debate including 5-6 experts took place, with many comments from the open public. It is important to note that there are Master's programmes, both in and outside the UK, that use the 2050 Pathways models and tools in their courses. In addition, the My2050 game is also communicated to pupils of various schools in the UK; a "schools' toolkit" is available and downloadable from the project's website, as well as from other websites, including the Department for Education website.
It has to be noted that, due to the project's open-source nature, it is quite difficult to tell how many people are using the platform, and who exactly.
In addition, a large number of presentations have been given at workshops, schools, conferences, to NGOs, to international colleagues, etc. A presentation was also made to the European Commission. Really positive media coverage has also been observed (around 15 key articles about the project[278][279]). Other references to the case have also been made (e.g. at cultural festivals).
  
	
  
	
  
4.1.5. Cross analysis of the case studies

In this section we analyse common features of and differences between the case studies with regard to:
- Usage (policy phase, policy domain, participation, involvement of decision-makers)
- Impact (satisfaction, role in the actual decision taken, quality of the policy)

[278] https://www.gov.uk/2050-pathways-analysis
[279] http://www.involve.org.uk/2050-pathways-public-dialogue/
| | 2050 Pathways | GLEAM | Opinion Space 3.0 | UrbanSim |
| Policy phase | Design | Design | Agenda Setting | Design |
| Policy domain | Energy | Health | Foreign Policy | Urban Planning |
| Number of participants | 16.000 pathways created | Not relevant | 2000 ideas | 100s |
| Involvement of decision makers | High | High | High | High |
| Actual usage of the output in policy-making | High (used in the main Low Carbon Strategy document) | High, used by international agencies | Low | Medium, used by several US municipalities |
| Press impact | High | Low | High | n.a. |
| Feedback by policy-makers | High | n.a. | High | n.a. |
| Actual improvement of policy quality | n.a. | 1 paper positively reviewed the predictions | n.a. | n.a. |

Table 3 Cross analysis of the cases impact
  
	
  
Firstly, we can match the cases under scrutiny to the different phases of the policy cycle. As we can see from Table 3, most of the cases relate to the "Design" of policies, while there is limited coverage of the "Agenda Setting", "Implementation" and "Monitor and Evaluation" phases[280].
  
This is due to the fact that the key challenges faced by policy makers (e.g. "the need to detect and understand problems before they become unsolvable" or "the reduction of uncertainty on the possible impacts of policies") require a certain degree of proactivity in order to deliver high-quality, evidence-based and impact-oriented policies rather than to perform trials under real conditions. In this respect the "Design" phase seems to prevail over the others when it comes to the tools most desired by policy makers. More particularly:
  
• In the “Design” phase policy makers are able both to explore their options and to seek an ex-ante assessment of the policies under consideration from the citizens' perspective. On the other hand, it is possible that decisions have already been taken, in which case the emphasis is laid on the implementation of policies.
• In the “Implementation” phase the main objective is to increase acceptance of, and collaboration on, the terms already deployed between the decision makers and the citizens. On the other side, it is worth noticing that the improved collaboration is handled by tools and methods focusing on the communication of messages aimed at favouring the smooth implementation of a policy. Those tools do not belong to the “core” Policy Making 2.0 methods, even though they display a close relation with them.
• In the “Monitor and Evaluation” step decision makers are informed about the impact of already deployed policies; here it is possible to identify only a few ICT-based tools and methods that really have an impact and engage fruitfully with stakeholders and citizens. Many discussion tools have already been experimented with by policy makers, showing so far many limitations and constraints.
• These issues also apply to the “Agenda Setting” phase, as there is an absence of new ways to engage citizens at scale during the early procedures that precede the actual design phase. Most of the tools have been around for many years now, and in some cases have merely been refurbished with some new tweaks and upgraded features. Crowdsourcing seems to fit this stage very well, but again the impact of such experiments remains anecdotal and the results are not embedded in policy-making, at least in the cases analysed.
	
  
In these cases, the policy-making 2.0 tools have been used to address real problems in sensitive policy domains, and all have been initiated either by governments or as a result of collaboration between researchers and public administrations at different levels, mainly in a top-down approach. In particular, GLEAM and Opinion Space 3.0 were initially introduced as research initiatives that gathered significant attention and subsequent funding from public authorities. In fact, all cases build on a wide range of techniques that result from research and exemplify how research can be effectively applied in real-life settings and public policies. Multi-disciplinarity in the teams of all cases has brought together different perspectives and ensured appropriate modelling of policy options and interpretation of outcomes. Building a dynamic dialogue with policy makers, external stakeholders (NGOs, academia, industry) and specific experts has provided significant insights and feedback in all cases (to different extents: in GLEAM, for example, the participation of citizens is limited). Further, the real support of public officials and experts has been instrumental in the success of all cases. To address the targeted needs of policy makers and citizens, and to allow them to contribute in a more efficient and productive way to the policy issues at stake, dedicated tools have been developed in each case study. Naturally, the learning curve required to understand and use a policy model effectively varies significantly in each case, depending on the complexity of the policy model(s) running in the background.
Uptake by participants varies, from a few hundred up to several thousand. Impact evaluation, however, was not built into the initiatives from the beginning. Typically, no specific Key Performance Indicators (KPIs) were set, and no evaluation was envisaged. However, the numbers of visitors and of interactions have demonstrated their success and impact, which has been reinforced with the help of appropriate stakeholder engagement strategies. It should be noted that in some cases (GLEAM) users resorted to the corresponding platform as a result of a natural phenomenon (i.e. the H1N1 pandemic), whereas in others (Opinion Space 3.0 and 2050 Pathways Analysis) it was the outcome of large press coverage that demonstrated the value of the cases. By studying cases that had strong internationalisation aspects (i.e. transferring experience from the national to the international level in 2050 Pathways Analysis, and from the US to the EU in UrbanSim), the difference in socio-cultural dimensions emerges and should not be neglected, as it may decide the success of applying a case to different geographic settings and socio-technical landscapes.
	
  
4.2. Survey of Users' Needs results
The CROSSOVER project delivered a survey of users, presented in Deliverable D5.1. As part of the survey, respondents were asked which ICT tools and methodologies they need and adopt as part of the governance and policy-making processes they are involved in. Figure 18, which displays some preliminary and selected results, shows that Open Data and Big Data methodologies are already adopted by more than 30% of respondents. Moreover, other methodologies in use are strictly related to Open and Big Data, such as visual analytics, which can be used to make sense of large amounts of data, and large-scale simulations, which need large amounts of data to be performed.
	
  
Figure 18 Adoption of ICT Tools and Methodologies for policy-making (source: CROSSOVER Survey of Users' Needs 2012)
In the same way, Figure 19 presents the respondents' views regarding the needs and challenges in the policy-making process.
Figure 19 Needs and Challenges in the Policy Making Process (source: CROSSOVER Survey of Users' Needs 2012)
	
  
The horizontal axis reports the average score given to each option, on a scale from 1 (not important) to 5 (very important). As we can see, the most relevant challenges in the policy-making process are “Detect and Understand Problems before they become unsolvable” and “Understand the Actual Impact of Policies”. At any rate, the differences among the various options do not seem overly significant, suggesting that the broad range of challenges in policy making is recognised as important.
The respondents were also asked to suggest other important challenges pertaining to policy making, beyond the ones listed in the online questionnaire. The suggested challenges include:
• Create an effective and collaborative dialogue among policy makers and affected stakeholders
• Ensure reversibility as well as basic societal values (e.g. security, equality, privacy, etc.)
• Analyze and visualize information for identifying problems
• Foster more direct communication between citizens and policy makers
• Translate citizens' input into actionable outputs
• Create common understanding across areas of responsibility
• Secure buy-in from key stakeholders and prevent blocking of new policy by vested interests
• Encourage the widespread acceptability of simulation as a public policy tool
• Take responsibility for choices made by oneself or one's own team.
Comparing the ICT tools and methodologies adopted with the challenges and needs of the policy-making process, it is clear that detecting and understanding problems before they become unsolvable, and understanding the actual impact of policies, are possible only by means of advanced policy modelling and simulation tools and techniques. As already mentioned, the precision of large-scale simulations and modelling is enabled and improved by the availability of large amounts of data. From the analysis of these preliminary findings, it seems evident that the use of big data, and the ways to analyse and exploit them, is perceived as an important need by policy makers and is already partly applied in some spheres of public governance. However, our analysis found that in most cases the application of techniques and methodologies to make sense of massive amounts of data is still at an embryonic stage and remains largely experimental. This is confirmed by the findings of the mapping and identification of case studies conducted as part of the CROSSOVER project, illustrated in part in the previous section.
	
  
4.3. Analysis of the prize winners
The project also launched a prize competition for the best policy-making 2.0 application. The prize was assigned based on the criteria of technological innovation, uptake and impact. We now present the descriptions of the three winners, drawn from their applications to the prize.
	
  
	
  
IdeaScale SAVE Award
Describe briefly the context of the solution
President Obama's belief was that the best ideas for potential government savings opportunities would come from the front lines (federal employees). In 2009, he launched the SAVE Award (Securing Americans Value and Efficiency), hoping to find ideas that would make government more effective and efficient and ensure taxpayer dollars were spent only on what was necessary. Not only would this help reduce the debt, but it would affect every American taxpayer. At that point, there was no existing system that could amalgamate a steady stream of ideas and feedback around those suggestions. Out of a desire to remain true to the values of the open government initiative (transparency, participation, and collaboration), the White House selected IdeaScale as the most viable solution that served all of these needs. Not only did it allow employees to submit ideas, but they could also vote on those ideas, comment on and improve them, and the best ones rose to the top for review. Over the past four years, federal employees have submitted tens of thousands of cost-cutting ideas through the SAVE Award. Dozens of the most promising ideas have been included in the President's Budget. Each year the OMB narrows the best ideas down to a “final four”. The American people vote online to choose the winner. The winner then comes to Washington to present their idea to the President. They needed a system that would serve that entire process: submission, voting, evaluation and monitoring, transparent presentation and collaborative development.
	
  
What impact did it have on the quality of policies?
Over the past four years, the White House has collected thousands of ideas that cut costs and improve efficiency. This has allowed the White House to meet its main goals:

Main Goals
• Generate Suggestions: nearly 100,000 ideas have been collected in the past four years of SAVE Awards. Engagement remains high, with thousands of users signing on to submit, vote, and comment each year. These ideas come from every government arena and from numerous geographic locations, allowing nationwide collaboration.
• Improve Government Programs and Save Money: each year a winning idea was selected. Each idea has been assessed as potentially saving the government millions of dollars.
• 2009: as is the case in most hospitals across the country, medicine that is used in the hospital is not given to patients to take home; instead, it is thrown out. Nancy Fichtner proposed ending this practice and sending excess medication home with patients. This is expected to save $21 million by 2014.
• 2010: the winning idea was to reduce the number of hard copies of the Federal Register by offering an opt-in choice for those who want to access the Register in print. This is expected to save more than $4 million per year.
• 2011: Matthew Ritsko suggested that NASA employees form a lending library of tools that they can share rather than purchasing costly equipment each time they need to build something.
• 2012: Frederick Winter proposed that all Federal employees with transit benefits adopt the reduced senior fare as soon as they are eligible. In the DC area, this change would lower the cost of employee travel by 50 per cent.
• Improve Engagement: Federal employees have stated that they feel more empowered with the tool that is available to them. In its first year, the Executive Office of the President of the United States received 38,000 SAVE Award submissions.
	
  
How extensive was policy-maker and public take-up?
The application required a minimal commitment on the part of the policymaker, because the ideas had been submitted and prioritised by the crowd at large. Although all ideas were reviewed, the most promising options revealed themselves at an early stage of review. The contribution on the part of each individual was minimal as well, since the submission of ideas and voting minimised the time commitment for all.

Clear goals and a ready-made solution allowed IdeaScale to successfully deploy the SAVE Award community against an extreme timeline. IdeaScale successfully delivered its ideation software on time and within budget to the Executive Office of the President. The platform scaled easily and has never shown any strain under a high volume of users (nearly 90,000 over the past four years of SAVE Awards). In the first three weeks of 2009 alone, nearly 40,000 ideas were collected.
	
  
	
  
Liquid Democracy
Describe briefly the context of the solution
The Enquete-Commission “Internet and Digital Society” was a temporary parliamentary committee established for the period 2010-2013. During this period, politicians and experts worked together in order to develop policy recommendations for the future on socially relevant and complex internet policy issues. This Enquete-Commission decided, for the first time ever in German history, to link its decision-making processes to, and work with, an eParticipation platform, which aimed to enable the citizens involved to participate more widely online. The platform, called enquetebeteiligung.de, has subsequently been implemented by the non-profit association Liquid Democracy e.V., which is based in Berlin and further develops the open-source participation software Adhocracy. On this platform the citizens involved could build up proposals and solutions for issues concerning the commission, discuss relevant topics and vote upon the proposals. The most popular proposals were taken up in the final report of the Enquete-Commission, the official policy guideline on the debated topics. Its success is shown by the fact that two of the twelve recommendations were mentioned in the final report with exact quotes taken over from the proposals on enquetebeteiligung.de.
What impact did it have on the quality of policies?
For the first time in German history, citizens could take part, through an online process, in the work of an official parliamentary committee. Their proposals were partly included in the final report of the Enquete-Commission, which is considered the official policy guideline for the German Government for the coming years. Through this process the policy recommendations for the German Government were given a unique democratic legitimation. The proposals made on enquetebeteiligung.de were of such high quality that a committee composed of professional politicians and experts decided to take them over, citing full quotes in its final report. Moreover, it disproved the widespread opinion that the average citizen is neither interested in legislation nor able to suggest high-quality contributions. Enquetebeteiligung.de has shown that it works. If we strive to enhance the possibilities for involving citizens in online policy-making processes, setting this as a democratic goal, we now have a blueprint for how it can be done.
How extensive was policy-maker and public take-up?
Enquetebeteiligung.de and the Enquete-Commission itself received wide and positive coverage in the German media. The final report, in which citizens' proposals were adopted, has recently been published. According to a scientific evaluation by Zeppelin University, based in Friedrichshafen (Germany), the quality of the proposals was extraordinarily high, and the use of the Adhocracy platform will be highly recommended for future commissions. This reflects the high ratings given by the citizens interviewed and confirms their positive experience.
	
  
2050 Pathways Calculator
Describe briefly the context and functionalities of the solution, and how it was used in policy-making
UK’s	
  Climate	
  Change	
  Act	
  2008	
  set	
  in	
  law	
  a	
  long-­‐term	
  greenhouse	
  gas	
  emissions	
  reduction	
  target	
  for	
  
the	
  year	
  2050,	
  as	
  well	
  as	
  a	
  framework	
  for	
  5-­‐yearly	
  “carbon	
  budgets”	
  to	
  reach	
  it.	
  In	
  drafting	
  its	
  Low	
  
Carbon	
  Transition	
  Plan	
  in	
  2009,	
  the	
  first	
  White	
  Paper	
  which	
  sought	
  to	
  bring	
  together	
  the	
  diverse	
  
challenges	
  of	
  the	
  newly	
  created	
  Department	
  of	
  Energy	
  and	
  Climate	
  Change	
  (DECC),	
  the	
  Department	
  
wished	
   to	
   investigate	
   further	
   what	
   options	
   the	
   country	
   had	
   in	
   meeting	
   its	
   target	
   to	
   reduce	
  
greenhouse	
  gas	
  emissions	
  by	
  80%	
  on	
  1990	
  levels	
  by	
  2050.	
  
The Department already had models for understanding long-term options, such as MARKAL, but none of these was easy or quick to run inside the Department, and so senior decision makers did not feel they had an opportunity to interrogate the results.
This was the context that pointed to the need for the work, but it also highlighted the importance of the ethos of the work: that it should be understandable, radically transparent and interactive, give quick results, and set out all the underpinning assumptions easily.
The solution which we developed in DECC's Strategy Directorate was the “2050 Pathways Calculator”. This is an interactive computer model, available in three formats: the detailed Excel model, a user-friendly web tool, and a simplified ‘serious game’ or simulation (https://www.gov.uk/2050-pathways-analysis).
Publishing the 2050 Calculator in full has enabled a numerate and broader public debate about the UK's energy demand and supply, and provided a platform which allowed everyone to join the discussion on the same terms. The model seeks to encompass all physically possible outcomes, rather than point to only those thought to be most likely at any one time.
	
  	
  
What impact did it have on the quality of policies?
The 2050 Calculator helps everyone engage in the debate and lets Government make sure our planning is consistent with the long-term aim. The 2050 Calculator condenses months of work by technical experts into minutes. It can be used to engage a range of audiences on the challenges and opportunities of the energy system. It brings energy and emissions data alive, showing the benefits, costs and trade-offs of different versions of the future. It allows you to explore the fundamental questions of how the UK can best meet energy needs and reduce emissions. The tool has been shared transparently, both in the sense of sharing all its assumptions and formulations, and also in the sense of sharing its results in a way that people can understand and use.

The analysis has been used in the Government's Budget statements and Annual Energy Statements, and it featured centrally in the UK Government's Carbon Plan 2011. The team drew out key conclusions from the work which have been picked up by teams across government: for example, the potential doubling of electricity demand over the period to 2050 even as energy demand as a whole falls; the limited supply of bioenergy, with competing demand from different sectors; and its use in understanding the renewables strategy and targets. It has helped senior people in the Department understand issues such as insulation ambition levels, fossil fuel usage, power grid decarbonisation, etc. The 2050 Calculator has been shown to new Ministers when they join the Department. It was used at points such as the Fukushima incident to respond to new questions.
	
  
Take-up from the public and policy makers
The 2050 Futures team often trains colleagues across DECC and other government departments in how to use the 2050 Calculator, and it has been widely used alongside other, more detailed models. Outside of government, the 2050 Calculator has also been widely used. The transparency and accessibility of the approach have led to collaborations from diverse quarters:

• With Parliamentary Select Committees and staff in Parliament, to help give MPs a factual basis for debate.
• With individual enthusiasts and experts, who have contributed bug fixes and improvements to our modelling, interfaces and documents.
• With Cardiff University, to understand public attitudes to the choices we face when considering the energy system as a whole.
• With the Foreign Office and China, South Korea, Taiwan, Bangladesh, South Africa and the Asian Development Bank, helping each of them to develop their own versions of the 2050 Calculator. We are discussing potential partnerships with many other countries.
• With many schools and universities in their teaching. We provided a ‘Schools Toolkit’ to help teachers of Geography, Science, Maths and Citizenship to use the 2050 Calculator; the Toolkit is most suited to students aged 11-16 years old. We also funded a Youth Panel to engage with the work and report to DECC.
• With companies and NGOs, e.g. the infrastructure company National Grid and Friends of the Earth, in their own outreach and internal thinking.
	
  	
  
In terms of user statistics:
• My2050 has had over 16,000 pathways submitted by the public.
• The team presented to over 500 stakeholders in the autumn 2010 Call for Evidence period.
• The web tool typically has 10,000 unique users over a three-month period.
• 100 people have registered to use the Wiki (these are the most active Calculator users).
	
  
	
  
4.4. Lessons learnt from cases and prize
What emerged from the analysis of the prize and the cases is that evidence of uptake is clearly available and can now be considered mature.
However, the evidence presented by the cases and the prize candidates with regard to their impact remains thin and anecdotal in nature. There is no thorough assessment of the impact on the quality of policies. Typically, the impact is demonstrated in terms of:
- visits to the website and participation rates
- feedback and visibility towards media and politicians
- actual influence over the decisions taken
while the actual impact on the quality of policies is yet to be demonstrated. Some initial work (in the cases of GLEAM and 2050 Pathways) is focussing on comparing the predictions with reality as it unfolds. Only the case of IdeaScale presents some tangible ex ante estimates of the advantages of the decisions taken through policy-making 2.0, but no thorough ex post evaluation.
It is fair to conclude that evidence about the impact remains mostly at the level of actual usage of the final results in policy decisions. Unfortunately, there is no systematic ex post evaluation of such decisions. This weak evidence base is a major obstacle to encouraging further uptake of these solutions, and further investment in them. We are far from having robust impact evaluation of policy-making 2.0, even at the micro-level of individual cases.
	
  
4.5. An additional research challenge: counterfactual impact evaluation of Policy Making 2.0
The findings of the case studies, the survey and the prize consistently show that no systematic evaluation of the impact of policy-making 2.0 is available. This represents a major challenge to further adoption and experimentation in this domain. There are still a number of unresolved questions regarding policy-making 2.0 tools and methodologies:
• Do they help engage new stakeholders and communities?
• Do they help predict impacts better than other models?
• Do they bring new relevant ideas useful for policy-making?
• Do they actually lead to better policies?
	
  
These questions could be structured into a new evaluation framework for policy-making 2.0, which encompasses the full intervention logic, from contextual information through the intervention and its uptake to its impact.
	
  
Figure 20: a proposed evaluation framework for policy-making 2.0 [the figure depicts five stages: Context (socio-political factors) → Intervention (design of technology, design of methods, cost) → Uptake (more participants, more diverse participation) → Impact efficiency (high quality of ideas, impact on actual decisions, better predictions) → Impact effectiveness (improved performance of the public sector, improved empowerment of citizens)]
The originality of this model lies in its comprehensiveness, particularly downstream. The typical evaluation of policy-making 2.0 initiatives stops at the level of uptake, such as visitors and users; in the best practices identified, it includes actual influence on the decision taken. The proposed framework also includes the actual benefits for the quality of policy making, such as the measurement of predictive capacity, the improved performance of the public sector, and the improved empowerment of citizens.
In this respect, there is a lack of systematic, robust evaluation of different policy methods. Initial and anecdotal evidence points to the presence of potential impacts, but no proper counterfactual impact evaluation approach is available to date. In what follows we present the main methodologies in the field, and how they can be applied to policy making 2.0. We would like to stress that counterfactual impact evaluation is more likely to be used to evaluate policies and initiatives rather than technologies and methodologies. Moreover, it is more suitable for evaluating policies impacting a number of distinct actors.
Evaluating the impact of policies is a complex task, because one would like to know what the value of a given output/outcome variable would have been in the absence of the project. This is a value that, by definition, cannot be observed for the units involved in the project. In other words, evaluators cannot know what the behaviour of a treated unit would have been in the absence of treatment. Similarly, we have no counterfactuals for the non-treated units (those not involved in the program). This is a well-known problem in policy evaluation analysis (see for instance Neyman, 1923, and Rubin, 1974, 1978, 1980, 1986), which has been overcome using several methods. What is common to all these ‘alternative’ approaches is that they attempt to identify or create the most appropriate control group281 in order to overcome the two main obstacles in the estimation of the counterfactual:
• The ‘selection bias’, which consists in the fact that the target population differs from the counterfactual population due to pre-intervention features. A solution is the introduction of an identification hypothesis stating that pre-intervention variables are sufficient to ‘reconstruct’ the control group of non-beneficiaries (the counterfactual).
• The presence of spontaneous dynamics, due to the fact that the target population differs from the control population in the trend of the result variable. A solution is the introduction of an identification hypothesis to take into consideration the spontaneous dynamics of the trend of the result variable.
281 For an introduction to policy evaluation see Khandker, Koolwal and Samad (2010).

There are basically six main counterfactual impact assessment methodologies:
Randomised controlled trials
A solution can be found in the case of randomised processes (this happens when the possibility to take part in a project is made available to people on the basis of a random process). In this situation we do not expect structural differences between those who are treated (and receive support) and those who are not, so that we can use the non-supported subjects as a control group for comparison with the former group.
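To make the logic concrete, the following minimal Python sketch estimates the average treatment effect as a simple difference in group means; the outcome data are purely hypothetical and do not come from any of the cases discussed in this report.

```python
# Minimal sketch: under random assignment, the difference in mean outcomes
# between treated and control units estimates the average treatment effect.
# The outcome values below are hypothetical.
from math import sqrt
from statistics import mean, stdev

treated = [7.1, 6.8, 7.4, 8.0, 6.9, 7.7]  # outcomes of randomly treated units
control = [6.2, 6.5, 6.0, 6.9, 6.4, 6.1]  # outcomes of the control group

effect = mean(treated) - mean(control)
# Standard error of the difference between two independent sample means
se = sqrt(stdev(treated) ** 2 / len(treated) + stdev(control) ** 2 / len(control))
print(f"Estimated average treatment effect: {effect:.2f} (s.e. {se:.2f})")
```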
  	
  
Difference-in-Difference (DID)
The impact of a policy on an outcome can be estimated by computing a double difference, one over time (before and after the treatment) and one across subjects (between treated and non-treated). This simple method requires only aggregate data on the outcome variable, observed for both groups before and after the treatment. Unfortunately, the difference-in-difference method assumes that the trends in the treatment and comparison groups are the same. With only four points of observation on means (two groups observed in two periods) we do not know whether this assumption is correct. However, with two additional pre-intervention data points the parallelism assumption becomes testable.
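As an illustration, the sketch below computes the double difference on four hypothetical group means (the numbers are illustrative assumptions, not data from the cases above).

```python
# Minimal difference-in-difference sketch on hypothetical group means.
mean_treated_before, mean_treated_after = 10.0, 14.0
mean_control_before, mean_control_after = 9.0, 11.0

change_treated = mean_treated_after - mean_treated_before  # 4.0
change_control = mean_control_after - mean_control_before  # 2.0 (spontaneous dynamics)

# Under the parallel-trends assumption, the policy effect is the excess
# change of the treated group over the change of the comparison group.
did_estimate = change_treated - change_control             # 2.0
print(f"Difference-in-difference estimate: {did_estimate}")
```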
  
Regression Discontinuity Design (RDD)
One solution that has been proposed in the literature is the use of the so-called “regression discontinuity design”. This method can be applied to situations in which it is possible to identify a clear cut-off level for treatment access and in which treatment status is based on observable characteristics. In this case the cut-off is defined by the eligibility rules of the project, so that the treatment group is made up of people who just satisfy these criteria (and hence have access to the project), whereas the control group is composed of people who are just below the cut-off level and do not have access to the project. In such a circumstance it is reasonable to assume that the control group and the treated group are very similar against most criteria, and that the small difference in the variables guaranteeing access to treatment is not sufficient to justify a different value of the outcome variable, so that a difference in the latter can be entirely attributed to the treatment.
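The sketch below illustrates the idea on hypothetical data for a sharp design: units just above and just below an assumed eligibility cut-off are compared within a narrow bandwidth.

```python
# Minimal sharp-RDD sketch: compare mean outcomes in a narrow window on
# either side of the cut-off. Scores, outcomes and cut-off are hypothetical.
from statistics import mean

cutoff, bandwidth = 50.0, 5.0
# (eligibility_score, outcome); units with score >= cutoff receive treatment
units = [(44, 3.1), (47, 3.3), (48, 3.2), (49, 3.4),
         (51, 4.1), (52, 4.0), (53, 4.3), (56, 4.2)]

window = [(s, y) for s, y in units if abs(s - cutoff) <= bandwidth]
treated = [y for s, y in window if s >= cutoff]
control = [y for s, y in window if s < cutoff]

# Just above and just below the threshold, units are assumed comparable, so
# the jump in mean outcomes estimates the local treatment effect.
print(f"Local effect at the cut-off: {mean(treated) - mean(control):.2f}")
```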
  	
  
Instrumental variables and natural experiments
This category is relevant when exposure to the policy is to a certain degree determined by an external force which does not affect the outcome of the policy directly, but only indirectly, through its influence on the exposure. Angrist and Krueger (2001) define this situation as a natural experiment, i.e. one “where the forces of nature or government policy have conspired to produce an environment somewhat akin to a randomized experiment”. There are two main approaches:
• The Wald estimator, in which the treatment effect is identified by the ratio of the difference in average outcomes between units eligible and not eligible for treatment to the difference in the probability of treatment induced by the instrument. This method is used in the case of randomisation with partial compliance and of randomised encouragement (a minimal sketch is given after this list).
• Two-stage least squares, consisting of a first stage in which a model predicting the probability of treatment as a function of the instrument and other variables is estimated, and a second stage in which the outcome equation is estimated using the predicted probability of treatment. This is the case for non-randomised natural experiments.
Unfortunately this method is often not feasible, as it does not work when treatment exposure is not mandatory and depends upon some selection process that needs to be controlled for. This is the case at hand, in which participation in the projects has been voluntary. Another major weakness of the approach is that it can be difficult to find an instrument that is both relevant and exogenous.
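As announced above, here is a minimal sketch of the first approach, the Wald estimator, on hypothetical data: eligibility plays the role of the instrument, and take-up of the treatment is only partial.

```python
# Minimal Wald-estimator sketch under randomised encouragement. Each record
# is (eligible, treated, outcome); all values are hypothetical.
from statistics import mean

data = [(1, 1, 9.0), (1, 1, 8.5), (1, 0, 6.0), (1, 1, 9.5),
        (0, 0, 6.5), (0, 0, 6.0), (0, 1, 8.0), (0, 0, 5.5)]

y_elig = mean(y for z, d, y in data if z == 1)   # mean outcome, eligible
y_not = mean(y for z, d, y in data if z == 0)    # mean outcome, not eligible
p_elig = mean(d for z, d, y in data if z == 1)   # take-up among the eligible
p_not = mean(d for z, d, y in data if z == 0)    # take-up among the others

# Outcome difference scaled by the take-up difference the instrument induces
wald = (y_elig - y_not) / (p_elig - p_not)
print(f"Wald estimate of the treatment effect: {wald:.2f}")
```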
  
	
  
Matching
The most common matching method is propensity score matching. This approach is based on the premise that, for each unit that has been treated, it is possible to find at least one non-treated unit that is “close” enough to the treated counterpart. In this context “close” means that it exhibits a value of the propensity score very similar (if not identical) to the one observed for the treated unit. The propensity score is defined as the conditional probability of receiving the treatment and is usually estimated using logit or probit regressions. After having computed the propensity scores for all the units in the dataset, it is possible to use this value to match units in the treated group with at least one unit in the control group. There are various techniques for undertaking this matching process. Some use replacement while others do not, and some use more complex definitions of distance, but the logic in all these approaches is very similar: find a close match for the treated unit within the group of untreated units, using the values of the propensity scores. This approach works well if the evaluator has access to a representative sample of the underlying population and can control for all the variables determining the treatment status (the so-called “selection on observables” assumption); otherwise the process can be bedevilled by the selection bias issue.
There are three main types of propensity score matching:
• Nearest available matching, according to which each treated unit is matched with the one untreated unit having the most similar initial characteristics (a minimal sketch is given after this list)
• Radius matching, according to which each treated unit is matched with all of the untreated units having a propensity score within a certain degree of tolerance with respect to that of the treated unit
• Kernel matching, in which the outcome of each treated unit is compared with a weighted average of the outcomes of all non-treated units
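As announced above, the sketch below illustrates nearest available matching on hypothetical, pre-computed propensity scores; in practice the scores would first be estimated with a logit or probit regression, as described above.

```python
# Minimal nearest-available matching sketch. Propensity scores are assumed
# to have been estimated already; all (score, outcome) pairs are hypothetical.
from statistics import mean

treated = [(0.81, 12.0), (0.64, 10.5), (0.55, 9.8)]
untreated = [(0.78, 10.9), (0.66, 9.9), (0.52, 9.1), (0.30, 8.0)]

effects = []
for score_t, outcome_t in treated:
    # Match each treated unit with the untreated unit whose propensity score
    # is closest (matching with replacement, for simplicity).
    score_c, outcome_c = min(untreated, key=lambda u: abs(u[0] - score_t))
    effects.append(outcome_t - outcome_c)

# Average effect of the treatment on the treated over the matched pairs
print(f"Matched-sample ATT estimate: {mean(effects):.2f}")
```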
  
There is a very important difference between propensity score matching and multiple regression analysis. In propensity score matching, pre-intervention characteristics differ between treated and non-treated units, affecting the final outcomes of the treated and non-treated differently and independently of the effect of the programme, thereby creating a selection bias. On the other hand, multiple regression analysis makes use of the data from all the treated and non-treated units, separating the impact on the final outcome due to the different initial characteristics (included in the model as control variables) from the impact of the programme. The trick, then, is to include as control variables all the initial characteristics that differ between the treated and non-treated units, in order to compare the final outcomes and interpret the remaining difference as the impact of the programme. When a higher number of eInclusion projects is available, we will also adopt the multiple regression approach in our analysis.
Matching is mostly inspired by outcome additionality and to some extent overlooks behavioural additionality. Findings from matching should always be combined with real-time case study evidence to allow some insight into the causal mechanisms. The matched-sample approach, in fact, always raises questions of just how similar the subjects are.
	
  
Self-reported counterfactuals
  	
  
This approach, employed especially for assessing the issue of behavioural additionality (Aslesen, Broch, Koch, & Solum, 2001; Davenport, Grimes, & Davies, 1998), consists of questioning assisted subjects directly and posing counterfactual questions to them. This involves asking the recipients of public support how their employment-related behaviour changed, asking formerly supported people how the withdrawal of assistance affected their innovation-related behaviour, and asking non-supported people how they think their innovation-related behaviour would have changed had they received support. Moreover, as one of the objectives of our investigation is to improve the intervention process, the questioning would also involve the intermediary actors. Surveys are a good solution, provided, of course, that the respondents do not answer strategically and are able to reflect on behavioural changes in a counterfactual situation. The analysis of direct questions on additionality assumes that the respondents are indeed able to reflect on their behaviour in hypothetical, counterfactual situations and that they are telling the truth to the best of their knowledge. However, as respondents have an interest in the continuation of public support, they might be tempted to over-emphasize its merits (Sakakibara, 1997). From the opposite perspective, one could argue that some people might be reluctant to admit their dependence on public support. Either way, the differences between hypothetical and real situations should be controlled for through a mixture of matching and self-reported counterfactuals.
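
As a purely hypothetical illustration of how such self-reported answers could be summarised, the sketch below tabulates a survey question of the form "would you have acted anyway, without support?". The column name and answer coding are invented for this example; the roadmap does not prescribe any particular scheme.

```python
# Hypothetical tabulation of self-reported counterfactual answers:
# the share of recipients answering "no" to "would you have acted
# anyway without support?" is a crude indicator of additionality.
import pandas as pd

answers = pd.DataFrame({
    "acted_anyway": ["no", "yes", "partially", "no", "no", "yes"],  # toy data
})

shares = answers["acted_anyway"].value_counts(normalize=True)
print(shares)
print("additionality share:", shares.get("no", 0.0))
```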
  	
  
	
  
Challenges and research gaps
  
• Often we are not facing a natural experiment situation, as exposure to the treatment is not mandatory and depends upon some selection process that needs to be controlled for
• Often it is not clear which is the treated unit. For example, a policy-making tool implemented on the internet can affect many groups of people from different countries, and in any case it is very difficult to obtain data on the untreated
• Conversely, the same units are often treated with several different policies and initiatives
• Sometimes the treated unit is an entire country: this makes it impossible to apply methodologies such as randomized control trials or matching
• Finally, there is a need to develop new sets of indicators for assessing the impact
  
	
  
The most promising methods appear to be randomized controlled trials and self-reported counterfactuals. Randomized controlled trials can be used for assessing the impact of policies and initiatives, especially at the local level. On the other hand, by using the self-reported counterfactual method through workshops or surveys, it is possible to extract counterfactual information from the agents joining the
programmes and initiatives, complementing it with an investigation of the underlying background information.
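
For the randomized controlled trial route, the core estimator is simply the difference in mean outcomes between randomly assigned groups. The sketch below uses synthetic, hypothetical data purely for illustration.

```python
# Minimal sketch of the basic RCT estimator: under random assignment,
# the difference in group means is an unbiased estimate of the average
# treatment effect (ATE). Data here are synthetic and hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=200)  # control outcomes
treated = rng.normal(loc=11.0, scale=2.0, size=200)  # treated outcomes

ate = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"ATE = {ate:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```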
  	
  
Let us now look at some examples of counterfactual impact evaluation applied to open government, assessing the validity of claims for transparency and participation:
  
• Zhang (2012)282 ran a pilot field experiment in Kenya to explore how variation in the content of an information campaign can impact political behaviour in villages. The experiment involved two interventions. The first provided a Constituency Development Fund (CDF) report card, which detailed the budgets of all the CDF projects allocated funding in the constituency for that fiscal year, to see whether villagers respond to unaccounted-for money in locally visible projects. The second intervention, based upon the mixed findings in the literature as to how information can enable citizens to take action, coupled the report card with a public participation flyer, to see whether information about legal rights and decision-making processes is necessary for citizens to use the report card to take action
• Olken (2010) ran an experiment in which 49 Indonesian villages were randomly assigned to choose development projects through either direct election-based plebiscites or through representative-based meetings. In villages where plebiscites were held, there was a dramatic increase in satisfaction among villagers, in knowledge about the project, in perceived benefits, and in reported willingness to contribute. Moreover, changing the political mechanism had much smaller effects on the actual projects selected, with some evidence that plebiscites resulted in projects chosen by women being located in poorer areas. According to the outcomes of the study, satisfaction and legitimacy are substantially increased by direct participation.
  participation.	
  
	
  
282 Kelly Zhang, Increasing Citizen Demand for Good Government in Kenya (May 2012) (unpublished manuscript), available at http://cega.berkeley.edu/assets/cega_events/4/Zhang-Kelly_Increasing-Citizen-Demand_Kenya_2012_v2.pdf
5. Conclusions: Policy-Making 2.0 between hype and reality
  
In this final section, we bring together the findings of the different sections and put policy-making 2.0 in the perspective of the long-term improvement of public decision making.
A first question to be addressed is to what extent the research challenges relate to a specific challenge in policy-making, as described in section 3 and illustrated in the figure below.
As we can see, the described research challenges capture all the main needs of policy-makers, and in particular the capacity to detect problems early and to leverage collective intelligence in policy-making.
  
	
  
	
  
Figure 21: Relation Between Policy-Making Needs and Research Challenges
  
In view of this analysis, the next step is to relate each of the research challenges to the policy-making cycle. Each research challenge is in fact relevant for one or more of the specific tasks, not for all. The figure below illustrates this relationship: in each of the phases of the cycle, for each of the tasks, we can identify the potential impact of the research challenges described.
	
  
The policy cycle starts with the agenda setting phase, where the problem is identified and analysed. In this phase, visualization and opinion mining can help to identify problems at an early stage. Advanced modelling techniques are then used to untangle the causal relationships behind the problem, revealing the causal roots that need to be addressed by policy.
Once the problem is clearly spelled out, we move to the policy design phase, where collaborative solutions are useful to identify the widest range of options by leveraging collective intelligence. In order to facilitate the choice of the most effective option, immersive simulations support decision-makers by taking into account unexpected impacts and relationships. Collaborative governance then makes it possible to further develop and fine-tune the most effective option, for example through commentable documents.
Once the option is developed and adopted, we enter into policy implementation. In this phase, it is crucial to ensure awareness, buy-in and collaboration from the widest range of stakeholders: social network analysis, crowdsourcing and serious gaming are useful to deliver this.
Already during implementation, we move into monitoring and evaluation. Open data allow stakeholders and decision makers to better monitor execution; together with sentiment analysis, they can be used to evaluate the impact of the policy, also through advanced visualization techniques.
  
In summary, our vision for 2030 embodies a radically different context for policy-making 2.0.
On policy modelling and simulation, thanks to the standardisation and reusability of models and tools, systems thinking and modelling applied to policy impact assessment have become pervasive throughout government activities, and are no longer limited to high-profile regulation. Model building and simulation are carried out directly by the responsible civil servants, collaborating with different domain experts and colleagues from other departments. Visual dynamic interfaces allow users to directly manipulate the simulation parameters and the underlying model.
Policy modelling software has become productized and engineered, and is delivered as-a-service through the cloud, bundled with added-value services and multidisciplinary support including mathematical, physical, economic, social, policy and domain-specific scientific support.
Cloud-based interoperability standards ensure full reusability and modularity of models across platforms and software.
System policy models are dynamically built, validated and adjusted, taking into account massive datasets of heterogeneous data with different degrees of validity, including sensor-based structured data and citizen-generated unstructured opinions and comments. By integrating top-down and bottom-up agent-based approaches, the models are able to better explain human behaviour and to anticipate possible tipping points and domino effects.
  
On collaborative governance, policy-making leverages collective intelligence and collective action. It accounts for the greater polycentricity of our governance system. While traditional tools are designed for public decision-makers, these research challenges are more symmetric by nature, in order to engage stakeholders throughout the phases of the policy-making cycle. Thanks to visualisation and design, it is able to reach out to new stakeholders and lower the barriers to entry into policy discussions. Policy-making 2.0 is designed to be not only more effective, but also more participatory.
  
This document described at length the specific opportunities of policy-making technology, and identified the technological bottlenecks that we need to overcome over the next years if we want to grasp the opportunities of Policy-Making 2.0. The research challenges identified so far are not just a simple collection of research issues, but an integrated bundle of innovative solutions that together can lead to a paradigm shift in policy-making.
  
Yet it does not escape us that the main bottlenecks to achieving this vision are not technological. The reason why policy-making is not already as open and evidence-based as it could be lies less in the limitations of the technology than in the concrete needs and limitations of human behaviour.
  
This is a lesson we have learnt from many years of studies on the impact of ICT, for example on e-government. Regardless of the technological tools at one's disposal, the key barriers to change lie in cultural and organisational issues.
  
We can't claim to propose a more human-centric policy-making that takes into account the complexity of human behaviour, and then fail to recognize the humanity of policy-makers. Policy-makers are agents, and as such are self-interested and driven by their own agendas. They are human, and therefore not perfectly rational and atomised. Citizens are human, and not that interested in public policy.
  
It would therefore be foolish to expect that the simple availability of the technology will suddenly free policy-making from politicking, corruption, personal interests or simple incompetence. It is not within the scope of this roadmap to develop generic policy recommendations for improving policy-making as such, yet we cannot treat non-technological factors as a simple black box: as described in section 2.3, technological tools have to take into account the concrete problems of policy-making.
  
We propose that policy-making 2.0 is not a panacea for better government, yet neither is it neutral with respect to the power relationships that enable such problems as corruption and incompetence to emerge. In other words, these are not "just tools" that can be used for good or bad: they provide the opportunity to re-frame the system of checks and balances that determines the likelihood of good or bad policy-making.
  
More open data, more transparent models and more visually accountable policy measures can facilitate the uncovering of corruption, personal interests and incompetence. The emphasis on the usability and openness of modelling is opening up policy-making to a wider range of stakeholders. And the availability of different simulated future scenarios enhances the accountability of the decisions that policy makers take today.
  	
  
There will always be room for malpractice and greed in policy-making 2.0, as in any human activity. This is, however, not an argument for giving up on improving the available methods. Raising the barriers to malpractice, and lowering the barriers to good practice, is an achievable goal worth pursuing.
  
6. References
1. Alexopoulos, C. & Kim, S.H. (2002). Output Data Analysis for Simulations. Proceedings of the 2002 Winter Simulation Conference, San Diego (CA), 8-11 December 2002.
2. Alexopoulos, C. & Seila, A.F. (1998). Output Data Analysis. In: Banks, J., ed., Handbook of Simulation: Principles, Methodology, Advances, Applications and Practice. John Wiley, New York.
3. Balci, O. (1989). How to assess the acceptability and credibility of simulation results. Proceedings of the 1989 Winter Simulation Conference, Washington (DC), 4-6 December 1989.
4. Barabasi, A.L. (2003). Linked: How Everything is Connected to Everything Else and What it Means for Business and Everyday Life. Plume Books.
5. Bederson, B. & Shneiderman, B. (2003). The Craft of Information Visualization: Readings and Reflections.
6. Olken, B.A. (2010). Direct Democracy and Local Public Goods: Evidence from a Field Experiment in Indonesia. American Political Science Review, 104(2): 243-267. MIT and National Bureau of Economic Research.
7. Bertin, J. (1967). Sémiologie Graphique. Les diagrammes, les réseaux, les cartes.
8. Boyd, D. & Crawford, K. (2011). Six Provocations for Big Data. A Decade in Internet Time: Symposium on the Dynamics of the Internet and Society, September 2011.
9. Burkholder, L., ed. (1992). Philosophy and the Computer. Boulder, San Francisco, and Oxford: Westview Press.
10. Chen, C. (2005). Top 10 unsolved information visualisation problems. IEEE Computer Graphics and Applications, 25: 12-16.
11. Christakis, N. & Fowler, J. (2007). The spread of obesity in a large social network over 32 years. New England Journal of Medicine, 357: 370-379.
12. Colecchia, A. & Schreyer, P. (2002). ICT Investment and Economic Growth in the 1990s: Is the United States a Unique Case? A Comparative Study of Nine OECD Countries. Review of Economic Dynamics, 5(2): 408-442.
13. European Commission (2009). Impact Assessment Guidelines. SEC (2009) 92.
14. Friendly, M. (2008). Milestones in the history of thematic cartography, statistical graphics, and data visualization.
15. Gass, S.I. (1983). Decision-aiding models: validation, assessment and related issues for policy analysis. Operations Research, 31(4): 601-663.
16. Gass, S.I. & Joel, L. (1987). Concepts of model confidence. Computers and Operations Research, 8(4): 341-346.
17. Goldsman, D. & Tokol, G. (2000). Output Analysis procedures for computer simulations. Proceedings of the 2000 Winter Simulation Conference, Orlando (FL), 10-13 December 2000.
18. Goldsman, D. & Nelson, B.L. (1998). Comparing systems via simulation. In: Banks, J., ed., Handbook of Simulation: Principles, Methodology, Advances, Applications and Practice. John Wiley, New York.
19. Heer, J., Viégas, F.B. & Wattenberg, M. (2007). Voyagers and Voyeurs: Supporting Asynchronous Collaborative Information Visualisation. CHI 2007, April 28-May 3, 2007, San Jose, California, USA.
20. Howard, C. (2005). The Policy Cycle: A Model of Post-Machiavellian Policy Making? Australian Journal of Public Administration, 64: 3-13. doi: 10.1111/j.1467-8500.2005.00447.x
21. Jorgenson, D.W., Ho, M.S. & Stiroh, K.J. (2008). A Retrospective Look at the US Productivity Growth Resurgence. Journal of Economic Perspectives, 22(1): 3-24.
22. Keim, D. (2011). Solving Problems with Visual Analytics: Challenges and Applications. Society, 65, p. 1.
23. Keim, D.A., Mansmann, F., Schneidewind, J., Thomas, J. & Ziegler, H. (2008). Visual Analytics: Scope and Challenges.
24. Kelton, W.D. (1997). Statistical Analysis of Simulation Input. Proceedings of the 1997 Winter Simulation Conference, Atlanta (GA), 7-10 December 1997.
25. Kuhlmann, S. & Meyer-Krahmer, F. (1994). Practice of Technology Policy Evaluation in Germany: Introduction and Overview. In: Becher, G. et al., eds., Evaluation of Technology Policy Programmes in Germany. Springer Netherlands, pp. 3-29.
26. Landesberger, T. von, Goerner, M. & Schreck, T. (2009). Visual Analysis of Graphs with Multiple Connected Components. IEEE Symposium on Visual Analytics Science and Technology.
27. Landesberger, T. von, Knuth, M., Schreck, T. & Kohlhammer, J. (2008). Data Quality Visualization for Multivariate Hierarchic Data. IEEE Information Visualization Conference (INFOVIS), Columbus, OH, USA.
28. Latour, B. (2009). Tarde's idea of quantification. In: Candea, M., ed., The Social After Gabriel Tarde: Debates and Assessments. London: Routledge, pp. 145-162.
29. Law, A.M. (1983). Statistical Analysis of Simulation Output Data. Operations Research, 31(6): 983-1029.
30. Law, A.M. (2006). Simulation Modelling and Analysis, 4th Ed. McGraw-Hill, New York.
31. Lazer, D., Pentland, A., Adamic, L., Aral, S., Barabási, A., Brewer, D., Christakis, N., Contractor, N., Fowler, J., Gutmann, M., Jebara, T., King, G., Macy, M., Roy, D. & Van Alstyne, M. (2009). Computational Social Science. Science, 323: 721-723.
32. Lessig, L. (2009). The New Republic.
33. Livnat, Y. et al. (2005). Visual correlation for situational awareness. IEEE Symposium on Information Visualization (INFOVIS), 1: 95-102.
34. Nakayama, M.K. (2002). Simulation Output Analysis. Proceedings of the 2002 Winter Simulation Conference, San Diego (CA), 8-11 December 2002.
35. OECD (2005). Modernising Government: The Way Forward. Paris, France: OECD.
36. Oliner, S.D. & Sichel, D. (2000). The Resurgence of Growth in the Late 1990s: Is Information Technology the Story? Journal of Economic Perspectives, 14: 3-22.
37. Ormerod, P. (2010). N Squared: Public Policy and the Power of Networks. RSA Pamphlets.
38. Pang, B. & Lee, L. (2008). Opinion Mining and Sentiment Analysis. Foundations and Trends® in Information Retrieval, 2(1-2): 1-135. doi: 10.1561/1500000011
39. Phelps, E., ed. (1970). Microeconomic Foundations of Employment and Inflation Theory. New York: Norton and Co. ISBN 0-393-09326-3.
40. Rhyne, T.-M. et al. (2004). Can We Determine the Top Unresolved Problems of Visualisation? Proc. IEEE Visualisation, IEEE Press, pp. 563-566.
41. Rosenblum (1994). Research issues in scientific visualization. IEEE Computer Graphics and Applications, 14(2): 61-63.
42. Saffo, P. (1997). Looking Ahead: Implications of the Present; Are You Machine Wise? Harvard Business Review.
43. Schelling, T.C. (1969). Models of segregation. The American Economic Review, 59(2): 488-493.
44. Sargent, R. (2009). Verification and Validation of Simulation Models. Proceedings of the 2009 Winter Simulation Conference.
45. Schlesinger, M. (1979). Terminology for model credibility. Simulation, 32(3): 103-104.
46. Shneiderman, B. (1996). The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. IEEE Symposium on Visual Languages.
47. Siewiorek, D., Smailagic, A., Furukawa, J., Krause, A., Moraveji, N., Reiger, K., Shaffer, J. & Wong, F.L. (2003). SenSay: A Context-aware Mobile Phone. In: Proceedings of the 7th IEEE International Symposium on Wearable Computers (ISWC), pp. 248-249.
48. Taleb, N. (2008). The Black Swan: The Impact of the Highly Improbable. Penguin.
49. Thomas & Ahrweiler, eds. (2005). Internationale partizipatorische Kommunikationspolitik - Strukturen und Visionen. Münster: LIT.
50. Thomas, J.J. & Cook, K. (2005). Illuminating the Path: The R&D Agenda for Visual Analytics. IEEE.
51. Tizzoni, M., Bajardi, P., Poletto, C., Ramasco, J., Balcan, D., Gonçalves, B., Perra, N., Colizza, V. & Vespignani, A. (2012). Real-time numerical forecast of global epidemic spreading: case study of 2009 A/H1N1pdm. BMC Medicine, 10: 165.
52. Tufte, E.R. (1983). The Visual Display of Quantitative Information. Graphics Press.
53. van Wijk, J. (2005). The Value of Visualization. IEEE Visualization Conference, pp. 79-86.
54. Ware, C. (2004). Information Visualization: Perception for Design, Second Edition. Academic Press.
55. Ware, C. (2008). Visual Thinking for Design. Morgan Kaufmann.
56. Wattenberg, M. (2008). The Word Tree, an Interactive Visual Concordance. IEEE Transactions on Visualization and Computer Graphics, 14(6).
  
7. List of Acronyms
  	
  
ABM: Agent Based Models
ADAMS: Anomaly Detection at Multiple Scales
APES: Agricultural Production and Externalities Simulator
BI: Business Intelligence
BRICs: Brazil, Russia, India and China
CAPTCHA: Completely Automated Public Turing test to tell Computers and Humans Apart
CDC: Centers for Disease Control and Prevention
CGE: Computational General Equilibrium Models
CINDER: Cyber-Insider Threat
COMA: COllaborative Modelling Architecture
CSCW: Computer-Supported Cooperative Work
CTR: Click-through Rate
CVADA: Center of Excellence on Visualization and Data Analytics
DARPA: Defense Advanced Research Projects Agency
DBMS: Database Management Systems
DECC: Department of Energy and Climate Change
DID: Difference-in-Difference
DHS: Department of Homeland Security
DOE: Design Of Experiment
DSGE: Dynamic Stochastic General Equilibrium Models
DW: Data Warehouse
ECB: European Central Bank
EDW: Enterprise Data Warehouse
EEA: European Economic Area
eID: Electronic Identity
ERP: Enterprise Resource Planning
ETL: Extract, Transform, Load
FAO: Food and Agriculture Organization
GHG: Greenhouse Gas
GIS: Geographic Information System
GLEAM: Global Epidemic and Mobility Model
GPS: Global Positioning System
GSS: Global Systems Science
HCI: Human-Computer Interaction
HLA: High Level Architecture
ICT: Information and Communication Technologies
ILEs: Interactive Learning Environments
I/O: Input/Output
KPIs: Key Performance Indicators
LOD: Linked Open Data
LMS: Learning Management Systems
LTA: Land Transport Authority
MarkAl: MARKet ALlocation
MAS: Multi-Agent Systems
MPOs: Members of Public Organizations
MPP: Massive Parallel Processing
NAF: New America Foundation
NASA: National Aeronautics and Space Administration
OECD: Organisation for Economic Co-operation and Development
OGD: Open Government Data
OGPL: Open Government Platform
OLAP: On-line Analytical Processing
OOP: Object Oriented Programming
PMOD: Policy Modelling
P2P: Peer-to-Peer
RDD: Regression Discontinuity Design
RDF: Resource Description Framework
RTAP: Real-time Analytics Processing
SaaS: Software as a Service
SAML: Security Assertion Markup Language
SMD: Social Media Data
SPARQL: SPARQL Protocol and RDF Query Language
STREP: Specific Targeted Research Projects
TRM: Technology Road-mapping
T21: Threshold 21
UNOSAT: United Nations Operational Satellite Applications Programme
XML: eXtensible Markup Language
  
	
  
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  D2.2.1	
  INTERNATIONAL	
  RESEARCH	
  ROADMAP	
  
160	
  |	
  P a g e 	
  
	
  
	
  
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  D2.2.1	
  INTERNATIONAL	
  RESEARCH	
  ROADMAP	
  
161	
  |	
  P a g e 	
  
	
  

Towards Policymaking 2.0

  • 1.
                                                                                                                                       0213F01     ICT  Seventh  Framework  Programme  (ICT  FP7)       Grant  Agreement  No:  288828   Bridging  Communities  for  Next  Generation  Policy-­‐Making         Towards  Policy-­‐making  2.0:   The  International  Research  Roadmap  on     ICT  for  Governance  and  Policy  Modelling       Internal  Deliverable  Form   Project  Reference  No.   ICT  FP7  288828   Deliverable  No.     D2.2.2   Relevant  Workpackage:   WP2   Nature:   Report   Dissemination  Level:   Restricted   Document  version:   FINAL  1.0   Date:   31  July  2013   Authors:   David   Osimo   &   Francesco   Mureddu   (T4I2),   Riccardo   Onori   &   Stefano  Armenia  (CATTID),  Gianluca  Carlo  Misuraca  (IPTS)   Reviewers:     Document  description:   This  deliverable  describes  the  final  version  of  the  new  International   Research   Roadmap   on   ICT   Tools   for   Governance   and   Policy  
  • 2.
                                                                                                                                       D2.2.1  INTERNATIONAL  RESEARCH  ROADMAP   2  |  P a g e   Modelling           History   Version   Date   Reason   Revised  by   1.0   30/06/2013   1ST  VERSION                                    
  • 3.
                                                                                                                                       D2.2.1  INTERNATIONAL  RESEARCH  ROADMAP   3  |  P a g e   TABLE  OF  CONTENTS     EXECUTIVE  SUMMARY...................................................................................................................................5   1.   BACKGROUND:  WHY  A  ROADMAP?........................................................................................................8   1.1.   The  rationale  of  the  roadmap:  what  is  the  problem? ............................................................................. 8   1.2.   An  open  and  recursive  methodology ...................................................................................................... 9   1.3.   Scope  and  definition.............................................................................................................................. 16   1.4.   Policy:  Between  politics  and  services ....................................................................................................19   2.   NOT  JUST  ANOTHER  HYPE:  THE  DEMAND  SIDE  OF  POLICY-­‐MAKING  2.0................................................ 20   2.1.   The  typical  tasks  of  policy-­‐makers:  the  policy  cycle ..............................................................................21   2.2.   The  traditional  tools  of  policy-­‐making...................................................................................................22   2.3.   The  key  challenges  of  policy-­‐makers.....................................................................................................23   2.3.1.   Detect  and  understand  problems  before  they  become  unsolvable............................................... 24   2.3.2.   Generate  high  involvement  of  citizens  in  policy-­‐making................................................................ 24   2.3.3.   Identify  “good  ideas”  and  innovative  solutions  to  long-­‐standing  problems ..................................24   2.3.4.   Reduce  uncertainty  on  the  possible  impacts  of  policies ................................................................ 25   2.3.5.   Ensure  long  -­‐  term  thinking ............................................................................................................28   2.3.6.   Encourage  behavioural  change  and  uptake ................................................................................... 28   2.3.7.   Manage  crisis  and  the  “unknown  unknown” ................................................................................. 28   2.3.8.   Moving  from  conversations  to  action ............................................................................................ 29   2.3.9.   Detect  non-­‐compliance  and  mis-­‐spending  through  better  transparency ......................................29   2.3.10.   Understand  the  impact  of  policies ............................................................................................... 29   2.4.   When  policy-­‐making  2.0  becomes  a  reality:  a  tentative  vision  for  2030............................................... 30   2.4.1.   Agenda  setting  phase:  recognizing  the  problem............................................................................30   2.4.2.   
Policy  design...................................................................................................................................31   2.4.3.   Implementation.............................................................................................................................. 32   2.4.4.   Evaluation.......................................................................................................................................32   2.5.   The  key  challenges  for  policy  makers  and  the  corresponding  phases  in  the  policy  cycle ..................... 32   3.   THE  SUPPLY  SIDE:  CURRENT  STATUS  AND  THE  RESEARCH  CHALLENGES................................................ 34   3.1.   Policy  Modelling ....................................................................................................................................34   3.1.1.   Systems  of  Atomized  Models .........................................................................................................34   3.1.2.   Collaborative  Modelling ................................................................................................................. 43   3.1.3.   Easy  Access  to  Information  and  Knowledge  Creation ....................................................................54   3.1.4.   Model  Validation ............................................................................................................................ 57   3.1.5.   Immersive  Simulation..................................................................................................................... 60   3.1.6.   Output  Analysis  and  Knowledge  Synthesis..................................................................................... 62   3.2.   Data-­‐powered  Collaborative  Governance............................................................................................. 65   3.2.1.   Big  Data ..........................................................................................................................................65   3.2.2.   Opinion  Mining  and  Sentiment  Analysis......................................................................................... 79   3.2.3.   Visual  Analytics  for  collaborative  governance:  the  opportunities  and  the  research  challenges....86   3.2.4.   Serious  Gaming  for  Behavioural  Change ........................................................................................ 99   3.2.5.   Linked  Open  Government  Data....................................................................................................104   3.2.6.   Collaborative  Governance ............................................................................................................110   3.2.7.   Participatory  Sensing....................................................................................................................114   3.2.8.   Identity  Management...................................................................................................................118   3.2.9.   Global  Systems  Science ................................................................................................................121   4.   THE  CASE  FOR  POLICY-­‐MAKING  2.0:  EVALUATING  THE  IMPACT .......................................................... 128   4.1.   Cross  analysis  of  case  studies..............................................................................................................128   4.1.1.   
Global  Epidemic  and  Mobility  Model ...........................................................................................129   Impact  of  Gleam.........................................................................................................................................129   4.1.2.   UrbanSim......................................................................................................................................130  
  • 4.
                                                                                                                                       D2.2.1  INTERNATIONAL  RESEARCH  ROADMAP   4  |  P a g e   4.1.3.   Opinion  Space...............................................................................................................................131   4.1.4.   2050  Pathways  Analysis................................................................................................................133   4.1.5.   Cross  analysis  of  the  case  studies.................................................................................................135   4.2.   Survey  of  Users’  needs  results.............................................................................................................137   4.3.   Analysis  of  the  prize  winners...............................................................................................................139   4.4.   Lessons  learnt  from  cases  and  prize....................................................................................................143   4.5.   An  additional  research  challenge:  counterfactual  impact  evaluation  of  Policy  Making  2.0................144   5.   CONCLUSIONS:  POLICY-­‐MAKING  2.0  BETWEEN  HYPE  AND  REALITY .................................................... 149   6.   REFERENCES....................................................................................................................................... 153   7.   LIST  OF  ACRONYMS............................................................................................................................ 157             LIST  OF  FIGURES   Figure  1:  the  fragmentation  of  policy-­‐making  2.0.................................................................................................. 8   Figure  2  Outline  of  the  participatory  process ......................................................................................................10   Figure  3:  Policy  Cycle  and  Related  Activities ........................................................................................................22   Figure  4:  Total  Disasters  Reported...................................................................................................................... 29   Figure  5:  Agricultural  Production  and  Externalities  Simulator  (APES)............................................................... 37   Figure  6:  Conversational  Modelling  Interface ....................................................................................................46   Figure  7:  the  PADGET  Framework....................................................................................................................... 47   Figure  8:  the  Time-­‐Space  Matrix ......................................................................................................................... 50   Figure  9:  COMA,  COllaborative  Modelling  Architecture .................................................................................... 
51   Figure  10:  OCOPOMO  eParticipation  Platform...................................................................................................52   Figure  11:  Twitrratr..............................................................................................................................................82   Figure  12:  Wordclouds.........................................................................................................................................83   Figure  13:  UserVoice............................................................................................................................................83   Figure  14    Open  Data  Business  Model  (source:  Istituto  Superiore  Mario  Boella)..............................................107   Figure  15  -­‐LOD  providers  and  their  linkages ......................................................................................................108   Figure  16  Rating  other  opinions'  in  Opinion  Space ............................................................................................132   Figure  17  Playing  the  My2050  game  for  the  demand  side.................................................................................134   Figure  18  Adoption  of  ICT  Tools  and  Methodologies  for  policy-­‐making  (source:  CROSSOVER  Survey  of  Users’   Needs  2012) .......................................................................................................................................................137   Figure   19   Needs   and   Challenges   in   the   Policy   Making   Process   (source:   CROSSOVER   Survey   of   Users’   Needs   2012) ..................................................................................................................................................................138   Figure  20:  a  proposed  evaluation  framework  for  policy-­‐making  2.0 .................................................................144   Figure  21:  Relation  Between  Policy-­‐Making  Needs  and  Research  Challenges...................................................149    
  • 5.
                                                                                                                                       D2.2.1  INTERNATIONAL  RESEARCH  ROADMAP   5  |  P a g e   Executive  Summary   This   deliverable   introduces   and   describes   the   interim   version   of   the   new   International   Research   Roadmap  on  ICT  tools  for  Governance  and  Policy  Modelling,  renamed  by  the  project  team  as  “Policy-­‐ Making   2.0”,   one   of   the   core   outputs   of   the   Crossover   project,   which   is   developed   under   WP2   Content  Production.     The   roadmap   aims   to   establish   the   scientific   and   political   basis   for   long-­‐lasting   interest   and   commitment   to   next   generation   policy-­‐making   by   researchers   and   policy-­‐makers.   In   doing   so,   it   contains  an  analysis  of  what  technologies  are  currently  available,  for  what  concrete  purposes,  and   what  could  become  available  in  the  future.  The  main  rationale  for  such  a  document  is  the  current   fragmentation   of   the   landscape   between   different   stakeholders,   disciplines,   policy   domains   and   geographical  areas.     The  document  is  the  result  of  a  highly  participative  process  undergone  between  the  first  draft  and   the  final  roadmap,  with  the  involvement  of  hundreds  of  people  through  11  different  input  methods,   from  live  workshops  to  online  discussion.    
  • 6.
                                                                                                                                       D2.2.1  INTERNATIONAL  RESEARCH  ROADMAP   6  |  P a g e   After  a  brief  introduction  of  the  background,  the  document  analyses  the  demand  side:  the  current   status  of  policy-­‐making,  with  the  key  tasks  (illustrated  by  the  traditional  policy  cycle)  and  existing   challenges:   a. Detect  and  understand  problems  before  they  become  unsolvable b. Generate  high  involvement  of  citizens  in  policy-­‐making c. Identify  “good  ideas”  and  innovative  solutions  to  long-­‐standing  problems d. Reduce  uncertainty  on  the  possible  impacts  of  policies e. Ensure  long  -­‐  term  thinking f. Encourage  behavioural  change  and  uptake g. Manage  crisis  and  the  “unknown  unknown” h. Moving  from  conversations  to  action i. Detect  non-­‐compliance  and  mis-­‐spending  through  better  transparency j. Understand  the  impact  of  policies It   then   presents   a   concrete   tentative   vision   of   how   policy-­‐making   could   look   in   2030,   if   these   challenges  were  overcome.   Section   3   represents   the   core   of   the   roadmap   and   presents   the   key   research   challenges   to   be   addressed   to   achieve   this   vision,   updating   the   original   version   based   on   the   input   of   the   consultation.  For  each  research  challenge,  it  presents  the  current  status,  the  existing  gaps,  and  short   and  long  term  research  perspectives.  The  key  research  challenges  are:   1. Policy  Modelling 1.1. Systems  of  Atomized  Models 1.2. Collaborative  Modelling 1.3. Easy  Access  to  Information  and  Knowledge  Creation 1.4. Model  Validation 1.5. Immersive  Simulation 1.6. Output  Analysis  and  Knowledge  Synthesis 2. Data-­‐powered  Collaborative  Governance 2.1. Big  Data 2.2. Opinion  Mining  and  Sentiment  Analysis 2.3. Visual  Analytics  for  collaborative  governance:  the  opportunities  and  the  research  challenges 2.4. Serious  Gaming  for  Behavioural  Change 2.5. Linked  Open  Government  Data 2.6. Collaborative  Governance 2.7. Participatory  Sensing 2.8. Identity  Management 2.9. Global  Systems  Science   But   to   what   extent   policy-­‐making   2.0   can   be   said   to   genuinely   improve   policy-­‐making?   Section   4   looks  at  the  available  evidence  about  the  impact  of  policy-­‐making  2.0,  across  case  studies,  the  survey   and  the  prize.  As  it  emerges  that  no  robust  impact  evaluation  is  available,  we  propose  an  additional  
research challenge on impact evaluation of policy-making, accompanied by a proposed evaluation framework.

Finally, we summarize the findings of the document, bringing together the different sections and suggesting that policy-making 2.0 cannot be considered a panacea for all issues related to bad public policies, but that at the same time it is more than just a neutral set of disparate tools. It provides an integrated and mutually reinforcing set of methods that share a similar vision of policy-making and that should be addressed in an integrated and strategic way; and it provides opportunities to improve the checks-and-balances systems behind decision-making in government, and as such it should be further pursued.

[Figure: the proposed evaluation framework (Context: socio-political factors; Intervention: design of technology and methods, cost; Uptake: more, and more diverse, participants; Impact on efficiency: quality of ideas, impact on actual decisions, better predictions; Impact on effectiveness: improved performance of the public sector, improved empowerment of citizens), together with the policy cycle (agenda setting, design, adoption, implementation, monitoring & evaluation) and the tools supporting each phase, e.g. collaborative governance platforms, social network analysis, serious gaming, crowdsourcing, open data, sentiment analysis, visualization, modelling and immersive simulation.]
1. BACKGROUND: WHY A ROADMAP?

1.1. The rationale of the roadmap: what is the problem?

The CROSSOVER project aims to consolidate and expand the existing community on ICT for Governance and Policy Modelling (built largely within FP7) by:

- Bringing together and reinforcing the links between the different global communities of researchers and experts: it will create directories of experts and solutions, and animate knowledge exchange across communities of practice, both offline and online;
- Reaching out and raising the awareness of non-experts and potential users, with special regard to high-level policy-makers and policy advisors: it will produce multimedia content, a practical handbook and high-level policy conferences with competitions for prizes;
- Establishing the scientific and political basis for long-lasting interest in and commitment to next generation policy-making, beyond the mere availability of FP7 funding: it will focus on use cases and a demand-driven approach, involving policy-makers and advisors.

The CROSSOVER project pursues this goal through a combination of content production and ad hoc, well-designed online and offline animation, as well as strong links with existing communities outside the CROSSOVER project and outside the realm of e-Government.

The present deliverable is one of the core outputs of the project: the International Research Roadmap on ICT Tools for Governance and Policy Modelling. It aims to create a common platform between actors fragmented across different disciplines, policy domains, organisations and geographical areas, as illustrated in the figure below.

Figure 1: The fragmentation of policy-making 2.0

But most of all, it aims to provide a clear outline of what technologies are available now for policy-makers to improve their work, and what could become available tomorrow.
CROSSOVER builds on the results of the CROSSROAD project¹, which elaborated a research roadmap on the same topic throughout 2010. With respect to the previous roadmap, this document is firstly a revised and updated version. Besides this, it contains some fundamental novelties:

- A demand-driven approach: rather than focussing on the technology, the present roadmap starts from the needs and the activities of policy-making and then links the research challenges to them.
- An additional emphasis on cases and applications: for each research challenge, we indicate relevant cases and practical solutions.
- A clearer thematic focus on ICT for Governance and Policy Modelling, achieved by dropping the more peripheral grand challenges of Government Service Utility and Scientific Base for ICT-enabled Governance.
- A global coverage: while CROSSROAD focussed on Europe, CROSSOVER includes cases and experiences from all over the world.
- A living roadmap: the present deliverable is accompanied by online repositories of tools, people and applications.

1.2. An open and recursive methodology

The present Research Roadmap on Policy-Making 2.0 has been developed with a sequential approach building on the existing research roadmap developed by the CROSSROAD project. In order to achieve the goal of overcoming fragmentation, an open and inclusive approach was necessary.

In the initial phase of the project, up to M6 (March 2012), the consortium started a collection of literature and of information about software tools and application cases. In addition to this desk-based review, the document has benefited from the informal discussions held on the LinkedIn group of the project (Policy-Making 2.0), where more than 800 practitioners and researchers discuss the practices and the challenges of policy-making.

The first draft of the roadmap was then released in M9 (June 2012) of the project for public feedback. The publication of the deliverable kicked off the engagement activities of the project, designed to provide further input and to improve the roadmap:

- As soon as it was released, the preliminary version of the roadmap was published in commentable format on the project website http://www.CROSSOVER-project.eu/. Animators stimulated discussion about it and generated comments by researchers and practitioners alike.
This participatory process helped enrich the roadmap, which was then published in its final version after validation by the communities of practitioners and policy-makers.
- Two workshops organised by the project aimed at gathering input on the research challenges and feedback on the proposed roadmap.
- An online survey, as well as several focus groups and meetings with practitioners from civil society and government, helped to focus the roadmap on actual needs.

¹ http://CROSSROAD.epu.ntua.gr/
Figure 2: Outline of the participatory process

The process for updating the roadmap therefore included a wide set of contributions. Firstly, the CROSSROAD roadmap was enriched with desk-based research: 202 cases collected in the platform, plus 4 cases collected and described in the case studies performed by the National Technical University of Athens (NTUA), and the 50 applications to the prize.

This first draft was then published for comments by some of the 800 members of the LinkedIn group, who also provided relevant cases. An additional survey of users' needs provided insights from 240 respondents and from over 200 people present at focus groups. Additional discussions with the Global Systems Science community, third-party workshops and the US Policy Informatics Network helped in further refining the roadmap.

The two workshops provided high-quality insight that enriched the roadmap with specific contributions.

In the table below we outline in detail the specific contributions to each section of the roadmap, which are described in full in the following section.
Type of contribution | Extent of the contribution | Contribution to the roadmap

1) Comments to the roadmap | 40 comments from 9 different experts | Visual Analytics; Systems of Atomized Models; Model Validation; Serious Gaming

2) Presentations in the PMOD workshop | 42 papers received; 70 registered participants; citizens of 20 countries present | Linked Open Government Data

3) Presentations in the Transatlantic workshop | 16 presentations; 30 participants | Collaborative Modelling; Systems of Atomized Models; Opinion Mining

4) Survey of Users' Needs | 236 respondents: 33% engaged in policy design, 27% in monitoring and evaluation, 22% in agenda setting, 18% in policy implementation | Impact of policy-making 2.0; Roadmap methodology; Linked Open Government Data; Opinion Mining; Collaborative Governance

5) Focus groups | 139 attendees at Forum PA, the leading Italian conference on e-government; 35 attendees at the INSITE event on sustainability; 40 attendees at the webinar for the United Nations Development Programme | Impact of policy-making 2.0; Roadmap methodology

6) Case studies | Collection of 202 tools and practices; elicitation of 20 best practices; further elicitation of 4 best practices for in-depth case study | Impact of policy-making 2.0; Roadmap methodology; annex with a repository of cases

7) Analysis of the prize | 47 submissions received; 10 shortlisted; 3 winners | Analysis of the prize process in the impact chapter

8) LinkedIn group | 840 participants | Comments to the roadmap; increased attendance at the workshops; collection of practices and tools

Table 1: Contributions to the roadmap

1) Comments to the Roadmap

The roadmap has been published in commentable format in two different versions: a short one on Makingspeechestalk², and a full version (downloadable after answering the survey on the needs of

² http://makingspeechestalk.com/CROSSOVER/
policy-makers) available on the CROSSOVER website³. Everybody was able to comment on single parts of the roadmap or to propose new topics, application cases and research challenges. The aim of publishing the document in commentable format was to get input from experts for co-creating the roadmap. More specifically, we were interested in knowing whether the current formulation of the research challenges was acceptable, and we wanted to collect best practices and application cases from the community of experts and practitioners at large. As already mentioned, the roadmap received over 40 useful and detailed comments from a number of experts in the different domains.

2) PMOD Workshop

The June 2012 workshop was the first of three to be organised under the CROSSOVER project. Formally titled "Using Open Data: policy modelling, citizen empowerment, data journalism" but generally referred to by the term PMOD (policy modelling), it set out to explore whether advocates' claims of the huge potential of open data as an engine for a new economy, as an aid to transparency and, of particular relevance to CROSSOVER, as an aid to evidence-based policy modelling, were justified. In terms of organization, the event was run as a W3C/CROSSOVER workshop and held at the European Commission's Albert Borschette Conference Centre in the two days immediately prior to the Digital Agenda Assembly. That combination helped to secure good support from a high-calibre audience. 42 papers were received, and the majority were accepted by the programme committee for full presentation. Authors of several other papers, plus members of the programme committee, the CROSSOVER animators and a small number of invited guests, comprised the 70 registered attendees, of whom 67 attended. The event reached a larger audience through a networking event organised on the evening following the workshop, to which attendees of the data workshop at the Digital Agenda Assembly were invited. Furthermore, through the live IRC channel and tweets using the #pmod hashtag, others were able to monitor proceedings. The agenda, attendee list and final report are all available on the W3C website, which provides a high profile for the workshop and the project.

Most of the results of the workshop were used to improve the research challenge on Linked Open Government Data.

3) Transatlantic Workshop

The Transatlantic Research on Policy Modelling Workshop was held in Washington, DC on January 28th and 29th, 2013. It was organized by the Millennium Institute and the New America Foundation (NAF), Washington, DC, USA.
NAF is a nonprofit, nonpartisan public policy institute that invests in new thinkers and new ideas to address the next generation of challenges facing the United States. This event brought together speakers and attendees working on and/or interested in improving ICT tools for education and for policy-makers. The speakers and attendees came from diverse backgrounds, both technical and non-technical, to share experiences and knowledge and to discuss ways to make the current state of modelling and ICT more accessible and attractive for decision-makers on both sides of the Atlantic Ocean. The models presented in the workshop have been integrated in the "Collaborative Modelling", "Systems of Atomized Models" and "Opinion Mining" research challenges.

4) Survey of Users' Needs

³ http://www.CROSSOVER-project.eu/ResearchRoadmap.aspx
The Survey of Users' Needs performed within the scope of the CROSSOVER project aimed at collecting the views and the requirements of policy-making stakeholders. In particular, the survey intended to stimulate actual and potential practitioners, such as decision-makers (government officials involved in the policy-making process) or policy advisors (technical experts advising decision-makers from outside government), to provide input, feedback and validation for the new research roadmap on ICT tools for Governance and Policy Modelling under development (CROSSOVER, 2012b). About 450 people took part in the overall exercise, combining live meetings (214 participants) and the online survey (240+ answers), providing concrete elements to improve the CROSSOVER roadmap and the other activities to be carried out by the project.

5) Focus groups

In addition to the survey, Tech4i2 ran a series of dedicated meetings where the roadmap was presented and followed up by intense dedicated discussion. These events were all high-profile, attended by policy-makers in the broad sense: not only government officials, but also policy advisors and civil society organisations. More precisely, three events were run:

• On the 17th of May 2012, CROSSOVER was invited to give a keynote speech on the CROSSOVER Research Roadmap at Forum PA, a leading European exhibition exploring innovation in Public Administration and local systems. For 22 years, Forum PA has attracted thousands of visitors and hundreds of exhibitors (public authorities, private companies and citizens) to come together and learn, with the participation of important leaders: ministers, Nobel prize winners (Amartya Sen, Edward Prescott), industry leaders (Luca Cordero di Montezemolo) and hundreds of speakers.

• On May 24th 2012, CROSSOVER was invited to attend the HUB/INSITE project meeting of sustainability practitioners from all over Europe. The Hub and the INSITE project brought together more than 25 sustainability practitioners working at the cutting edge of innovation within industry, urban development, energy, technology and policy across Europe, including people tackling today's key challenges in carbon reduction, smart cities, governance and behavioural change across all these areas. Tech4i2 presented the Research Roadmap and facilitated a dedicated session.

• On March 22nd 2012, CROSSOVER was invited to present the policy-making 2.0 model in a webinar for the practitioners of the "governance" network of UNDP Europe and CIS, which included about 40 people from Central and Eastern Europe.
6) Case Studies

Within the scope of the CROSSOVER project, the European Commission's Joint Research Centre, Institute for Prospective Technological Studies (JRC-IPTS), in collaboration with a team of experts from the National Technical University of Athens (NTUA), carried out the mapping and identification of case studies on ICT solutions for governance and policy modelling (CROSSOVER, 2013). The research design envisaged a set of macro-phases. The initial phase consisted of the creation of a case study repository through the identification and prioritization of potential sources of information and an open invitation to propose cases through Web 2.0 channels, followed by the definition of the first-round criteria for selecting at least twenty practices and the information-oriented selection of the corresponding case studies on applications of ICT solutions for governance and policy modelling. In the second phase, case studies were elicited through the definition of the second-round criteria for selecting eight promising practices and the application of a multi-criteria method, followed by further desk-research-based elaboration on the eight case studies selected by the multi-criteria method. In the third phase, the final four cases were selected and subjected to an in-depth analysis, carried out through meticulous study of the available public documentation and interviews with key stakeholders. After the final selection of cases and the in-depth analysis, the findings were synthesized through the analysis of the trends emerging from applications of ICT solutions for governance and policy modelling, as well as the development of key considerations for the CROSSOVER roadmap on the themes within its scope. Finally, the key findings of the analysis of the four cases were shared with the CROSSOVER partners and with the community that closely follows the Policy-Making 2.0 domain over various Web 2.0 channels, for feedback and validation. The key results of the case studies are described later in the impact section.
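To make the selection logic above concrete, the following minimal sketch shows the general shape of a weighted multi-criteria ranking. The criteria, weights and scores are invented for illustration; they are not the actual parameters used by JRC-IPTS and NTUA.

```python
# Minimal sketch of a weighted multi-criteria ranking, as might be used to
# shortlist case studies. Criteria names, weights and scores are illustrative
# assumptions, not the actual CROSSOVER selection parameters.

# Hypothetical criteria and weights (weights sum to 1.0)
WEIGHTS = {"relevance": 0.4, "maturity": 0.3, "evidence_of_impact": 0.3}

# Hypothetical candidate practices, scored 1-5 on each criterion
candidates = {
    "practice_A": {"relevance": 5, "maturity": 3, "evidence_of_impact": 4},
    "practice_B": {"relevance": 3, "maturity": 5, "evidence_of_impact": 2},
    "practice_C": {"relevance": 4, "maturity": 4, "evidence_of_impact": 5},
}

def weighted_score(scores: dict) -> float:
    """Return the weighted sum of criterion scores."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

# Rank candidates by descending weighted score and keep the top two
ranking = sorted(candidates, key=lambda name: weighted_score(candidates[name]),
                 reverse=True)
print([(name, round(weighted_score(candidates[name]), 2)) for name in ranking[:2]])
```

In the real exercise each criterion score would come from desk research on the candidate practice; the weighted sum then yields a transparent and reproducible shortlist.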
7) Analysis of the Prize

This prize was given to the best policy-making 2.0 applications, that is, for the best use of technology to improve the design, delivery and evaluation of government policy. The focus of the jury was on implementations that can show a real impact on policy-making, either in terms of better policy or of wider participation. These technologies included, but were not limited to:

• Visual analytics
• Open and big data
• Modelling and simulation (beyond general equilibrium models)
• Collaborative governance and crowdsourcing
• Serious gaming
• Opinion mining

An important condition for participating in the selection was the real-life application of technology to policy issues.

Out of 50 applications, the jury selected the best 12 and eventually the 3 winners, who each received an iPad mini. The principal domains of the applications were as follows:

• 23 in the "Collaborative Governance and Crowdsourcing" domain
• 13 in the "Open and Big Data" domain
• 4 in the "Visual Analytics" domain
• 2 in the "Modelling and Simulation (beyond general equilibrium models)" domain
• 2 in the "Serious Gaming" domain
• 1 in each of the following domains: "Open Source Governance", "Opinion Mining", "Participatory Policy Making"

All the relevant applications received have been integrated in the roadmap. The criteria for judging the applications were:

• Impact on the quality of policies
• Openness, scalability and replicability
• Extensiveness of public and policy-makers' take-up
• Technological innovativeness

In this respect, applicants to the prize were required to provide the following information:

• Name of the application
• Year of launch
• Short description of the technological domain
• Link to the application
• Description of the impact of the application on the quality of policies
• Description of the public and policy-maker take-up of the application
• Description of the extent to which the application was technologically innovative
• Contact details of the applicant

8) LinkedIn Group Policy-Making 2.0

A crucial element in the engagement of stakeholders is the group created on LinkedIn called Policy Making 2.0⁴, a virtual place where actual and potential practitioners of advanced ICT tools for policy-making can exchange experiences. The group gathers a selected pool of high-level members (over 840) engaging in discussions and exchanges of views. In order to foster debate in the group, the CROSSOVER consortium posts on a regular basis information about the new cases and tools to be integrated in the knowledge repository. Other discussion topics relate to the best ways to engage government in online policy-making, third-party content, and upcoming CROSSOVER workshops. In particular, the group has been used for disseminating the Survey on the ICT Needs of Policy Makers, as well as the roadmap in commentable format. The Policy Making 2.0 group also serves as a liaison channel with similar projects such as eGovPoliNet and OCOPOMO. As agreed, the eGovPoliNet LinkedIn group has merged with the CROSSOVER Policy Making 2.0 group, and after the end of the CROSSOVER project the interaction will continue, led by the eGovPoliNet consortium. Moreover, as we approach the end of the project, we have decided to shift from a closed LinkedIn group to an open one.

⁴ http://www.linkedin.com/groups?home=&gid=4165795
1.3. Scope and definition

Policy-making 2.0 refers to a set of methodologies and technological solutions aimed at innovating policy-making. As we will describe in section 2.1, its scope goes well beyond the notion of "decision-making" typical of eParticipation, and encompasses all phases of the policy cycle. The main goal is to improve the quality of policies, not to make them more consensual or representative.

Policy-making 2.0 is a new term that we have coined to express in more understandable terms the somewhat technical notion of "ICT for governance and policy modelling". Its usage in the course of the project proved more effective than the latter when discussing with stakeholders. Therefore, from now on we will refer to the roadmap as the Research Roadmap on Policy-Making 2.0.

The full set of methodologies and tools has been spelled out in the taxonomy in WP1⁵:

1.1. Open government information & intelligence for transparency
  1.1.1. Open & Transparent Information Management
    1.1.1.1. Open data policy
    1.1.1.2. Open data licence
    1.1.1.3. Open data portal
    1.1.1.4. Code list
    1.1.1.5. Vocabulary/ontology
    1.1.1.6. Reference data
    1.1.1.7. Data cleaning and reconciliation tool
  1.1.2. Data published on the Web under an open licence
    1.1.2.1. Human-readable data
    1.1.2.2. Machine-readable data in a proprietary format
    1.1.2.3. Machine-readable data published in a non-proprietary format
    1.1.2.4. Data published in RDF
    1.1.2.5. SPARQL endpoint for querying RDF data
    1.1.2.6. RDF data linked to other data sets
  1.1.3. Visual Analytics
    1.1.3.1. Visualisation of a single, static, embedded data set
    1.1.3.2. Visualisation of multiple static data sets
    1.1.3.3. Visualisation of a single live data feed or updating data set
    1.1.3.4. Visualisation of multiple data points, including live feeds or updates
1.2. Social computing, citizen engagement and inclusion
  1.2.1. Social Computing
    1.2.1.1. Collaborative writing and annotation
    1.2.1.2. Content syndication
    1.2.1.3. Feedback and reputation management systems
    1.2.1.4. Social Network Analysis
    1.2.1.5. Participatory sensing
  1.2.2. Citizen Engagement

⁵ The taxonomy presented here builds on the CROSSROAD taxonomy, which has been expanded, reviewed and updated by the members of the Consortium.
    1.2.2.1. Online deliberation
    1.2.2.2. Argumentation support
    1.2.2.3. Petition, polling and voting
    1.2.2.4. Serious games
    1.2.2.5. Opinion mining
  1.2.3. Public Opinion-Mining & Sentiment Analysis
    1.2.3.1. Opinion tracking
    1.2.3.2. Multi-lingual and multi-cultural opinion extraction and filtering
    1.2.3.3. Real-time opinion visualisation
    1.2.3.4. Collective Wisdom Analysis and Exploitation
1.3. Policy Assessment
  1.3.1. Policy Context Analysis
    1.3.1.1. Forecasting
    1.3.1.2. Foresight
    1.3.1.3. Back-Casting
    1.3.1.4. Now-Casting
    1.3.1.5. Early Warning Systems
    1.3.1.6. Technology Road-Mapping (TRM)
  1.3.2. Policy Modelling
    1.3.2.1. Group Model Building
    1.3.2.2. Systems Thinking & Behavioural Modelling
    1.3.2.3. System Dynamics
    1.3.2.4. Agent-Based Modelling
    1.3.2.5. Stochastic Modelling
    1.3.2.6. Cellular Automata
  1.3.3. Policy Simulation
    1.3.3.1. Multi-level & micro-simulation models
    1.3.3.2. Discrete Event Simulation
    1.3.3.3. Autonomous Agents, ABM Simulation, Multi-Agent Systems (MAS)
    1.3.3.4. Virtual Worlds, Virtual Reality & Gaming Simulation
    1.3.3.5. Model Integration
    1.3.3.6. Model Calibration & Validation
  1.3.4. Policy Evaluation
    1.3.4.1. Impact Assessment
    1.3.4.2. Scenarios
    1.3.4.3. Model Quality Evaluation
    1.3.4.4. Multi-Criteria Decision Analysis
1.4. Identity, privacy and trust in governance
  1.4.1. Identity Management
    1.4.1.1. Federated Identity Management Systems
    1.4.1.2. User-centric, self-managed and lightweight credentials
    1.4.1.3. Legal-social aspects of eIdentity management
    1.4.1.4. Mobile Identity (Portability)
  1.4.2. Privacy
    1.4.2.1. Privacy and Data Protection
    1.4.2.2. Privacy Enhancing Technologies
    1.4.2.3. Anonymity and Pseudonymity
    1.4.2.4. Open data management (including Citizen Profiling, 'digital shadow' tracing and tracking)
  1.4.3. Trust
    1.4.3.1. Legal Informatics
    1.4.3.2. Digital Rights Management
    1.4.3.3. Digital Citizenship Rights and feedback loops
    1.4.3.4. Intellectual Property in the digital era
    1.4.3.5. Trust-building Services (including data processing and profiling by private actors for public services)
1.5. Future internet for collaborative governance
  1.5.1. Cloud Computing
    1.5.1.1. Cloud service level requirements
    1.5.1.2. Business models in the cloud
    1.5.1.3. Cloud interoperability
    1.5.1.4. Security and authentication in the cloud
    1.5.1.5. Data confidentiality and auditability
    1.5.1.6. Cloud legal implications
  1.5.2. Pervasive Computing & Internet of Things in Public Services
    1.5.2.1. Ambient intelligence
    1.5.2.2. Exploiting smart objects
    1.5.2.3. Standardization
    1.5.2.4. Business models for pervasive technologies
    1.5.2.5. Privacy implications and risks
  1.5.3. Provision of next generation public e-services
    1.5.3.1. Fixed and mobile network access technologies
    1.5.3.2. Mobile web
    1.5.3.3. Models for information dissemination
    1.5.3.4. Management of scarce network capacity and congestion problems
    1.5.3.5. Large-scale resource sharing
    1.5.3.6. Interworking of different technologies for seamless connectivity of users
  1.5.4. Future Human/Computer Interaction Applications & Systems
    1.5.4.1. Web accessibility
    1.5.4.2. User-centered design
    1.5.4.3. Augmented cognition
    1.5.4.4. Human senses recognition

Policy-making 2.0 clearly encompasses a wide set of methodologies and tools. At first sight, it might appear unclear what the common denominator is. In our view, what they share is that they are designed to use technology in order to inform the formulation of more effective public policies. In particular, these technologies share a common approach in taking into account and dealing with the full complexity of human nature. As spelled out originally in the CROSSOVER project proposal: "traditional policy-making tools are limited insofar as they assume an abstract and unrealistic human being: rational (utility maximizing), consistent (not heterogeneous), atomised (not connected), wise (thinking long-term) and politically committed (like Lisa Simpson)". Policy-making 2.0 thus accounts for this diversity. Its methodologies and tools are designed not to impose change and artificial structures, but rather to interact with this diversity. Agent-based models account for the interaction between agents that are different in nature and values; systems thinking accounts for long-term interacting impacts; social network analysis deals with the mutual influences between people rather than with fully rational choices; big data analysis draws on observed behaviour rather than on theoretical models; persuasive technologies deal with the complex psychology of individuals and introduce gaming values to involve more "casual" participants.
Moreover, policy-making 2.0 tools allow all stakeholders to participate in the decision-making process.
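As a concrete illustration of the point made above about heterogeneous, interacting agents, the following minimal sketch runs a threshold model of behavioural uptake (in the spirit of Granovetter-style threshold models). All numbers and distributions are illustrative assumptions, not a calibrated policy model.

```python
import random

def cascade(draw_threshold, n=10_000, seed_frac=0.05, max_steps=500):
    """Iterate the share of adopters to a fixed point: an agent adopts once
    the current share of adopters reaches its personal threshold; a small
    seed group (threshold 0) adopts unconditionally."""
    seeds = int(n * seed_frac)
    thresholds = [0.0] * seeds + [draw_threshold() for _ in range(n - seeds)]
    share = 0.0
    for _ in range(max_steps):
        new_share = sum(1 for t in thresholds if t <= share) / n
        if new_share == share:
            break  # no new adopters: the cascade has reached a fixed point
        share = new_share
    return share

random.seed(42)
# Same seed group, same average threshold (0.5), different dispersion:
print(round(cascade(random.random), 2))                     # uniform thresholds: cascades to ~1.0
print(round(cascade(lambda: random.betavariate(2, 2)), 2))  # clustered thresholds: stalls near the seed share
```

Two populations with the same average threshold produce radically different aggregate outcomes; this is exactly the kind of effect that a homogeneous "representative agent" model cannot express.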
1.4. Policy: Between politics and services

The application of technology to governmental issues is not a new topic. Indeed, e-government, and the newer buzzword of government 2.0, have become mainstream in recent years: how and why could a forward-looking research agenda still present the 2.0 paradigm as innovative? The novelty lies in the "policy" part of the definition.

So far, the application of "2.0" technologies to governmental processes has focussed mainly on the usage of social media for political communication, best exemplified by the Obama campaign. The typical narrative is that in the age of social media, traditional communication campaigns and political parties are unsuited to generate commitment and action by citizens, who instead want to take an active part in the campaign and self-organize via social media: "A candidate who can master the Internet will not only level the playing field; he will level the opposition" (Larry Purpuro, RightClick Strategies).

A second area of strong focus has been the collaborative provision of public services based on peer-to-peer support and open data, best exemplified by the widespread "Apps for Democracy"-style contests. The narrative here is that government should act as a platform and enable third parties (and citizens themselves) to co-create and deliver public services based on open government data. This is what Goldsmith and Eggers (2004) call "governing by network".

Indeed, the Obama administration clearly shows these priorities, having moved from state-of-the-art campaigning in order to be elected to implementing a strong open data policy with crowdsourcing initiatives that let citizens create services based on these data.

Between "politics" and "public services co-delivery", much less attention has been devoted to the usage of social technology to improve public policy. While politics deals with the legislative branch, the Parliament, policy-making is mainly the realm of the executive branch. Typically, the job of policy-making involves a great deal of socio-economic analysis as well as consultation with stakeholders.

This roadmap aims to fill this gap by providing a complete picture of how technology can improve policy-making.
2. Not just another hype: the demand side of policy-making 2.0

In the context of new technologies, we are periodically informed about the emerging wave that will change everything, only to see it quickly forgotten after years or even months in what Gartner calls the "trough of disillusionment". While some of this emphasis is certainly driven by commercial interests, in many other cases it reflects the genuine optimism of its proponents, who tend to underestimate the real-life bottlenecks to adoption by less enthusiastic people.

Morozov critically calls this cyber-utopianism or technological solutionism (Morozov 2013); on a similar note, many years of eGovernment policy have revealed the fundamental importance of non-technological factors, such as organisational change, skills, incentives and culture.

One way to prevent policy-making 2.0 from becoming yet another hype on the Gartner curve is to spell out precisely the challenges that these new technologies help to address. Indeed, the importance of this demand-driven approach based on grand challenges is fully embraced by the new Horizon 2020 research programme of the European Union⁶. Furthermore, a demand-driven approach helps us to frame the technological opportunities in a language understandable to policy-makers, thereby supporting the awareness-raising objective of the CROSSOVER project.

When analysing the demand side, our first consideration is that policy-making is more important and complex than ever. The role of government has substantially changed over the last twenty years. Governments have had to re-design their role in areas where they were directly involved in service provision, such as utilities but also education and health. This is not simply a matter of privatisation, or of a linear trend towards smaller government. Indeed, even before the recent financial turmoil and the nationalisation of parts of the financial system, government's role in European societies was not simply "diminishing", but rather being transformed. At the same time, it is increasingly recognized that the emergence of new and complex problems requires government to collaborate ever more closely with non-governmental actors in understanding and addressing these challenges⁷. As an OECD report states:

"Government has a larger role in the OECD countries than two decades ago. But the nature of public policy problems and the methods to deal with them are still undergoing deep change. Governments are moving away from the direct provision of services towards a greater role for private and non-profit entities and increased regulation of markets. Government regulatory reach is also extending in new socio-economic areas. This expansion of regulation reflects the increasing complexity of societies.
At the same time, through technological advances, government's ability to accumulate information in these areas has increased significantly. As governments face more new and complex problems that cannot be dealt with easily by direct public service provision, more ambitious policies require more complex interventions and collaboration with non-governmental parties."

This is particularly challenging in our "complex" societies. "Complex" systems are those where "the behaviour of the system as a whole cannot be determined by partitioning it and understanding the behaviour of each of the parts separately, which is the classic strategy of the reductionist physical sciences". The present challenges governments must face, as described by the OECD, are complex in this sense: they are characterised by many non-linear interactions between agents, they emerge from these interactions, and they are therefore difficult to predict. The financial crisis is probably the foremost example of a complex problem, one that proved impossible to predict with traditional decision-making tools.

⁶ http://ec.europa.eu/research/horizon2020/index_en.cfm?pg=h2020
⁷ See Ostrom: http://www.nobelprize.org/nobel_prizes/economics/laureates/2009/ostrom-lecture.html
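A compact way to see why such systems resist prediction is the textbook logistic map: two trajectories that start almost identically diverge completely within a few dozen steps. This is a generic illustration of non-linear dynamics, not a model of any specific policy domain.

```python
# Logistic map x' = r*x*(1-x) in its chaotic regime (r = 4): two trajectories
# starting 1e-9 apart become uncorrelated within a few dozen iterations, so
# even a near-perfect measurement of today's state fails to pin down the future.
r = 4.0
x, y = 0.2, 0.2 + 1e-9
for step in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |x - y| = {abs(x - y):.3e}")
```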
2.1. The typical tasks of policy-makers: the policy cycle

Policy-making is typically carried out through a set of activities described as the "policy cycle" (Howard 2005). In this document we propose a new way of implementing policies, by first assessing their impacts in a virtual environment.

While different versions of the cycle are proposed in the literature, in this context we adopt a simple version articulated in 5 phases:

- agenda setting encompasses the basic analysis of the nature and size of the problems at stake, including the causal relationships between the different factors;
- policy design includes the development of the possible solutions, the analysis of the potential impact of these solutions⁸, and the development and revision of a policy proposal;
- adoption is the cut-off decision on the policy. This is the most delicate and sensitive area, where accountability and representativeness are needed. It is also the area most covered by existing research on e-democracy;
- implementation is often considered the most challenging phase, as it needs to translate the policy objectives into concrete activities that have to deal with the complexity of the real world. It includes ensuring a broader understanding, change of behaviour and the active collaboration of all stakeholders;
- monitoring and evaluation make use of implementation data to assess whether the policy is being implemented as planned and is achieving the expected objectives.

The figure below (authors' elaboration based on Howard 2005 and EC 2009) illustrates the main phases of the policy cycle (in the inner circle) and the typical concrete activities (outer circle) that accompany this cycle. In particular, the identified activities are based on the Impact Assessment Guidelines of the European Commission (EC 2009).

⁸ A very important element in policy design and formulation is ex-ante evaluation. In this respect, ICT tools for policy-making can play an important role, simulating alternative policy options and impacts before implementing a policy action.
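Purely as an illustration before the figure itself (the labels paraphrase the activities listed above and the structure is ours, not an official EC artefact), the cycle can be encoded as a simple ordered mapping whose last phase feeds back into the first:

```python
# The policy cycle as an ordered phase -> activities mapping. After the last
# phase, monitoring results feed back into agenda setting, closing the loop.
POLICY_CYCLE = {
    "agenda setting": ["identify problems", "collect evidence",
                       "understand causal relationships"],
    "policy design": ["identify possible options", "simulate impacts",
                      "develop and revise the preferred option"],
    "adoption": ["formal decision by the accountable authority"],
    "implementation": ["induce behavioural change", "generate collaboration",
                       "ensure buy-in"],
    "monitoring and evaluation": ["monitor execution", "collect feedback",
                                  "assess outcomes against objectives"],
}

def next_phase(phase: str) -> str:
    """Return the phase that follows `phase`, wrapping around to the start."""
    phases = list(POLICY_CYCLE)
    return phases[(phases.index(phase) + 1) % len(phases)]

assert next_phase("monitoring and evaluation") == "agenda setting"  # the feedback loop
```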
Figure 3: Policy Cycle and Related Activities

Traditionally, the focus of work on the impact of technology on policy-making has been on the adoption phase, analysing the implications of ICT for direct democracy. In the context of the CROSSOVER project, we adopt a broader conceptual framework that embraces all phases of policy-making.

2.2. The traditional tools of policy-making

Let us now present the methodologies and tools already traditionally adopted in policy-making. Typically, in the agenda-setting phase, statistics are analysed by government, and by experts contracted by government, in order to understand the problems at stake and their underlying causes. Surveys and consultations, including online ones, are frequently used to assess stakeholders' priorities, and are typically analysed in-house. General-equilibrium models are used as an assessment framework.

Once the problems and their causes are defined, the policy design phase is typically articulated through an ex-ante impact assessment approach. A limited set of policy options is formulated in-house with the involvement of experts and stakeholders. For each option, models are simulated in order to forecast possible sectoral and cross-sectoral impacts. These simulations are typically carried out with general-equilibrium models when the time frame is focused on the short- and medium-term economic impacts of policy implementation. Based on the simulated impact, the best option is submitted for adoption.

The adoption phase is typically carried out by the official authority, either legislative or executive (depending on the type of policy). In some cases, the decision is left to citizens through direct democracy, via a referendum or tools such as participatory budgeting, or to stakeholders through self-regulation.

The implementation phase is typically carried out directly by government, using incentives and coercion. It benefits from technology mainly in terms of monitoring and surveillance, in order to manage incentives and coercion, for example through the databases used for social security or tax revenues.

The monitoring and evaluation phase is supported by mathematical simulation studies and the analysis of government data, typically carried out in-house or by contractors. Moreover, as the numbers aggregate the impacts of everything that happens, including the policy itself, it is difficult to single out the impacts of one policy ex post. Final results are published in report format and fed back into the agenda-setting phase.
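The ex-ante logic of the policy design phase described above can be condensed into a toy sketch: simulate each option with an impact model and submit the best one for adoption. The options and the trivially linear "model" below are invented for illustration; a real assessment would call a calibrated general-equilibrium or sectoral model instead.

```python
# Toy ex-ante impact assessment: run a (here, deliberately trivial) impact
# model for each policy option and propose the option with the best simulated
# score. Options, coefficients and scores are invented for illustration only.

def impact_model(option: dict) -> float:
    """Stand-in for a simulation run: net benefit = benefit - cost - risk penalty."""
    return option["benefit"] - option["cost"] - 0.5 * option["risk"]

options = [
    {"name": "status quo", "benefit": 0.0, "cost": 0.0, "risk": 0.2},
    {"name": "subsidy",    "benefit": 3.0, "cost": 2.0, "risk": 0.6},
    {"name": "regulation", "benefit": 2.5, "cost": 1.0, "risk": 1.0},
]

best = max(options, key=impact_model)
print(f"option submitted for adoption: {best['name']} "
      f"(simulated net impact {impact_model(best):.2f})")
```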
2.3. The key challenges of policy-makers

Needless to say, the current policy-making process is seldom based on objective evidence, and not all views are necessarily represented. Dramatic crises seem to happen too often, and governments struggle to anticipate and deal with them, as the financial crisis has shown. Citizens feel a sense of mistrust towards government, as shown by the decrease in voter turnout at elections.

In this section, we analyse and identify the specific challenges of policy-making. The goal is to spell out clearly "what the problem is" in the policy-making process that policy-making 2.0 tools can help to solve.

The challenges have been identified through desk-based research on "government failure" in a variety of contexts, and are illustrated by real-life examples.

One first overarching challenge is the emergence of a distributed governance model. The traditional division between "market" and "state" no longer fits a reality where public decision and action are effectively carried out by a plurality of actors. Traditionally, the policy cycle is designed as a set of activities belonging to government, from agenda setting to delivery and evaluation.
However, in recent years it has been increasingly recognized that public governance involves a wide range of stakeholders, who are increasingly involved not only in agenda-setting but also in designing policies, adopting them (through the increasing role of self-regulation), implementing them (through collaboration, voluntary action and corporate social responsibility) and evaluating them (as in the case of civil society acting as a watchdog of government). As Elinor Ostrom stated in the lecture she delivered when receiving the Nobel Prize in Economics⁹: "A core goal of public policy should be to facilitate the development of institutions that bring out the best in humans. We need to ask how diverse polycentric institutions help or hinder the innovativeness, learning, adapting, trustworthiness, levels of cooperation of participants, and the achievement of more effective, equitable, and sustainable outcomes at multiple scales". This acknowledgement leads to important implications for the

⁹ http://www.nobelprize.org/nobel_prizes/economics/laureates/2009/ostrom-lecture.html
CROSSOVER roadmap: policy-making 2.0 tools are not just tools for government, but tools for all stakeholders to participate in the policy-making process¹⁰.

2.3.1. Detect and understand problems before they become unsolvable

The continuous striving for evidence-based policy-making can have some important and potentially negative implications for the capacity to identify problems promptly. Policy-makers have to balance the need for prompt reaction with the need for justified action, by distinguishing signal from noise. Delayed actions are often ineffective; at the same time, acting on short-term evidence can lead to opposite effects. In any case, governments have scarce resources and need to prioritize interventions on the most important problems.

For instance, the significant underestimation of the risks of the housing bubble in the late 2000s, and of the systemic reaction it would lead to, led to delayed reactions¹¹.

Systemic changes do not happen gradually, but become visible only when it is too late to intervene or the cost of intervening is too high. For example, ICT is today recognized as a key driver of productivity and growth, but the evidence to prove this became available only years after the initial investment. In fact, the initial lack of correlation between ICT investment and productivity growth was mostly due to the incorrect measurement of ICT capital prices and quality. Subsequent methodologies found that computer hardware played an increasing role as a source of economic growth (see inter alia Colecchia and Schreyer 2002, Jorgenson and Stiroh 2000, Oliner and Sichel 2000).

The problem in this case is therefore twofold: to collect data more rapidly, and to analyse them with a wider variety of models that account for systemic, long-term effects and that are able to detect and anticipate weak signals or unexpected wild cards.
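A minimal sketch of such a detector, assuming the simplest possible device, a rolling z-score over a monthly indicator (the data series and the threshold are invented): real early-warning systems would of course combine many indicators and far richer models.

```python
from statistics import mean, stdev

def weak_signal_alerts(series, window=12, z_threshold=2.5):
    """Flag observations that deviate sharply from the recent rolling baseline.
    A toy stand-in for the weak-signal detectors discussed above."""
    alerts = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Invented monthly indicator: stable, then a small early drift, then a crash.
indicator = [100, 101, 99, 100, 102, 100, 99, 101, 100, 100, 101, 99,
             100, 104, 107, 111, 118, 90]
print("alerts at observations:", weak_signal_alerts(indicator))
```

Even this crude detector flags the early drift several observations before the crash, which is the point of the argument above: the value lies in spotting the weak signal while intervention is still cheap.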
2.3.2. Generate high involvement of citizens in policy-making

The involvement of citizens in policy-making remains too often associated with short-termism and populism.

It is difficult to engage citizens in policy discussions in the first place: public policy issues are generally neither appealing nor interesting, as citizens fail to understand their relevance and to see "what's in it for me". The decline in voter turnout and the lack of trust in politicians reflect this. More importantly, there are innumerable cases where the "right" policies are not adopted because citizens "would not understand" or because they are not politically acceptable.

While the Internet has long promised an opportunity for widespread involvement, e-participation initiatives often struggle to generate participation. Participation is often limited to those who are already interested in politics, rather than involving those who are not.

When participation does occur, online debates tend to focus on eye-catching issues and polarized positions, in part because of the limits of the available technology. It is extremely difficult and time-consuming to generate open, large-scale and meaningful discussion.

2.3.3. Identify "good ideas" and innovative solutions to long-standing problems
Innovation in policy-making is a slow process. Because of the technical nature of the issues at hand, the policy discussion is often limited to restricted circles. Innovative policies tend to be "imported" through "institutional isomorphism". Innovative ideas, from both civil servants and citizens, fail to surface to the top of the hierarchy and are often blocked by institutional resistance.

Existing instruments for large-scale brainstorming remain limited in usage, and fail to surface the most innovative ideas. Crowdsourcing typically focuses on the most "attractive" ideas, rather than the most insightful.

2.3.4. Reduce uncertainty on the possible impacts of policies

Once policy options have been developed, simulations are carried out to anticipate the likely impact of policies. The option with the most positive impact is normally the one proposed for adoption.

Most existing methodologies and tools for the simulation of policy impacts work decently with well-known, linear phenomena. However, they are not effective in times of crisis and fast change, which unfortunately turn out to be exactly the situations where government intervention is most needed.

As an example, the European Central Bank nowadays bases its analysis of the euro area economy and monetary policy on a derived version of the DSGE model developed by Frank Smets and Raf Wouters in 2003 [12]. Smets and Wouters' model is deeply microfounded, which gives it a rigorous theoretical structure. Moreover, in this setting the reduced-form parameters are related to deep structural parameters in order to mitigate the Lucas critique, while the utility of agents can be taken as a measure of welfare in the economy (Phelps ed. 1970).

However, DSGE models suffer from several shortcomings that jeopardize their ability to predict, let alone prevent, a global crisis:
• Agents are assumed to be perfectly rational, having perfect access to information and adapting instantly to new situations in order to maximize their long-run personal advantage.
• So far, agents have entered the models as homogeneous representative entities, while it would be a step forward to be able to take agent heterogeneity into account.
• Canonical models consider atomistic agents with little or no interaction, and are thereby unable to cope with network externalities.

But most of all, it is the very notion of equilibrium which prevents standard models from dealing with crises. A stable steady-state equilibrium is a condition in which the behaviour of a dynamical system does not change over time, or in which a change in one direction is a mere temporary deviation.
This condition is characteristic of general equilibrium theory, in which a stable steady state is believed to be the norm rather than the exception. When the canonical model is out of equilibrium, the situation is seen as just a short lapse before the return to the steady state. This is in sharp contrast with the very notion of crisis, which represents a persistent deviation from equilibrium. Loosely speaking, the crisis phenomenon is not even conceivable within the framework of standard models.

These flaws are not limited to DSGE models, but extend to Computable General Equilibrium (CGE) and macro-econometric forecasting models, which are the traditional policy-making tools. In this view it would be very important to find new frameworks capable of avoiding those shortcomings.

[12] http://www.ecb.int/home/html/researcher_swm.en.html
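The role of the equilibrium assumption can be made concrete with a minimal numerical sketch (parameters purely illustrative): in a mean-reverting process of the kind that sits at the core of these models, even a large "crisis" shock decays geometrically back to the steady state, so a persistent crisis is ruled out by construction.

# A minimal sketch of why equilibrium-centred models cannot represent a
# persistent crisis: in an AR(1) process with |rho| < 1, any shock decays
# geometrically back to the steady state x_star. Parameters illustrative.
import random

random.seed(0)
x_star, rho, sigma = 1.0, 0.8, 0.02   # steady state, persistence, noise
x = x_star
for t in range(30):
    shock = -0.5 if t == 10 else random.gauss(0, sigma)  # one large "crisis" shock
    x = x_star + rho * (x - x_star) + shock
    print(f"t={t:2d}  deviation from steady state: {x - x_star:+.3f}")
# After the shock at t=10 the deviation roughly halves every three periods:
# by construction, the model can only treat a crisis as a temporary lapse.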
Some such methodologies and methods already exist, and some governments are using them. Our aim is to push further in that direction.

We need to move away from the equilibrium paradigm in order to be able to assess other issues: evolutionary dynamics; heterogeneity of technologies and firms; political and legal determinants of social stability; incentive structures; better modelling of technological change, innovation diffusion and economic systems (taking into account finance, debt and insurance); interactions between heterogeneous economic agents (firms and households) and central governments; heterogeneous responses to government incentives; and economic dependence on the ecosystem.

Jean-Claude Trichet, the former President of the ECB, put it clearly: "This doesn't mean we have to abandon DSGE...(but)...atomistic rational agents don't capture behaviour during a crisis...rational expectations theory has brought macroeconomics a long way ... but there is a clear case to re-examine the assumptions".

But the need for new policy-making tools is not limited to the economic realm: in the future it will become more and more important to anticipate non-linear, potentially catastrophic impacts from phenomena such as climate change (drought and global warming); threshold climate effects such as the withdrawal of polar sea ice, outgassing from melting permafrost, shifts in the Indian monsoon and ocean acidification; and social instability affecting economic well-being (social conflict, anarchy and mass movements of people).

The lack of understanding of systemic impacts has driven short-term policies which failed to grasp long-term, systemic consequences and side effects:
- An example of this approach is the sovereign debt issue. It is relatively easy for governments under popular pressure to increase expenditure and public debt to cope with short-term necessities, such as offsetting the negative impacts brought about by a regional or global crisis. It is much harder to take into account the long-term effect of higher interest rates on private investment and consumption through crowding out and fiscal pressure.
- Another example of short-termism is the financial policies pursued in South-East Asia at the beginning of the 1990s. Many countries, such as Thailand, liberalized their financial markets to foster the inflow of investments aimed at sustaining growth. Unfortunately, those capital flows triggered a real estate bubble which was at the root of the 1997-1998 crisis.
- In 2008 the Central Bank of Iceland extended liquidity loans to save banks on the verge of default against newly-issued, uncovered bonds, effectively printing fiat money on demand and causing a significant rise in inflation.
To cope with this rise in prices, the Central Bank of Iceland had to keep interest rates very high, thereby leading to an economic bubble.
- According to a large number of economists, the financial crisis was triggered by US government policies, spanning two administrations, which were intended to extend citizens' rights but instead produced an unprecedented number of risky mortgages, as well as the decline in mortgage underwriting standards that ensued. According to the "Financial Crisis Inquiry Commission Report" [13], those policies, together with the deregulation of the financial system, may have catalysed the crisis.

[13] http://fcic.law.stanford.edu/report
- Other examples are the bail-outs of financial institutions: in the short run such actions maintain employment and economic standards, while in the long term they induce moral hazard, keep inefficient companies operating, and decrease the trust of economic agents in regulation, which is the founding pillar of our economic system.
2.3.5. Ensure long-term thinking

In traditional economics, decisions are utility-maximising. Agents rationally evaluate the consequences of their actions, and take the decision that maximizes their utility. However, it is well known that this rationalistic view does not fully capture human nature. We tend to overestimate short-term impacts and underestimate long-term ones. In policy-making, short-termism is a frequent issue. People are reluctant to accept short-term sacrifices for long-term benefits. Politicians typically face elections every five years, and their decisions are often taken to maximize the impact "before the elections". There is also the perception that laypeople are less sensitive to long-term consequences, which are instead better understood by experts. Overall, long-term impact is less visible and easier to hide, due to lack of evidence and data. As a result, decisions are too often taken looking at short-term benefits, even though they might bring long-term problems.

Climate change is a typical policy area where sub-optimal decisions were taken because the short-term costs were considered to outweigh the long-term consequences. The long-term impact is not visible, while the short-term sacrifices are, even though ICT has played an important role in stimulating the debate and catalysing media attention on the issue.

2.3.6. Encourage behavioural change and uptake

Once policies are adopted, a key challenge is to make sure that all stakeholders comply with regulations or follow the recommendations. It is well known that the greatest resistance to a policy is not active opposition, but lack of application.

For instance, several programmes to reduce alcohol dependency in the UK failed because they relied excessively on positive and negative incentives such as prohibition and taxes, and did not take into account peer pressure and social relationships. They failed to leverage "the power of networks" (Ormerod 2010). Any policy aiming to reduce alcohol consumption through prohibitions and taxes is designed to fail as long as it does not take social networks into account, as binge drinkers typically have friends with similar problems. In another classical example (Christakis and Fowler 2007), a large-scale longitudinal study showed that the chances of a person becoming obese rose by 57 per cent if he or she had a friend who became obese.

The identification of social networks and the role of peer pressure in changing behaviour is not considered in traditional policy-making tools.
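The point can be illustrated with a minimal simulation sketch (a toy model; the graph, transmission probability and seed group are entirely hypothetical): when a behaviour spreads through friendship ties, the number of people affected grows in a way that no model of isolated individuals can reproduce, which is why interventions that ignore the network underperform.

# A minimal sketch of the "power of networks" argument: behaviour spreads
# through social ties, so interventions ignoring the network miss peer
# effects. The graph, probabilities and seed nodes are all hypothetical.
import random

random.seed(42)
n = 200
# Random friendship graph: each person gets roughly 4 undirected ties.
edges = {i: set() for i in range(n)}
for i in range(n):
    for j in random.sample(range(n), 4):
        if i != j:
            edges[i].add(j)
            edges[j].add(i)

adopted = set(random.sample(range(n), 5))   # initial group with the behaviour
p_peer = 0.05                               # per-friend transmission chance per step

for step in range(20):
    new = set()
    for i in range(n):
        if i not in adopted:
            exposed = sum(1 for friend in edges[i] if friend in adopted)
            # Probability of adopting grows with the number of affected friends.
            if random.random() < 1 - (1 - p_peer) ** exposed:
                new.add(i)
    adopted |= new
    print(f"step {step:2d}: {len(adopted)} affected")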
2.3.7. Manage crisis and the "unknown unknown"

The job of policy-makers is increasingly one of crisis management. There is robust evidence that the world is increasingly interconnected and unstable (also because of climate change). Crises are by definition sudden and unpredictable. Dealing with unpredictability is therefore a key requirement of policy-making, but the present capacity to deal with crises is designed for a world where crises are the exception rather than the rule. Donald Rumsfeld, the former US Secretary of Defense, famously said during the Iraq war that while the US government was capable of dealing with the "known unknown", the difficulty was the increasing recurrence of the "unknown unknown": those things that we don't know that we don't know.

There is evidence that the instability and chaotic nature of our world is increasing because of its growing connectedness. Every year, intense climate phenomena throw our cities into disarray through snow, flooding and fires. Each crisis seems to find our decision-makers unprepared and unable to deal with it promptly. As Taleb (2007) puts it, we live in the age of "Extremistan": a world of "tipping points" (Schelling 1969), "cascades" and "power laws" (Barabasi 2003) where extreme events are "the new normal".
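A minimal numerical sketch conveys what "Extremistan" means in practice (distributions and parameters chosen only for illustration): in a Gaussian world no single observation matters much, whereas under a heavy-tailed power law a single extreme event can account for a large share of the total.

# Gaussian vs. power-law worlds: under a heavy-tailed Pareto distribution
# a single extreme draw can dominate the sum. Parameters are illustrative.
import random

random.seed(7)
n = 10_000
gaussian = [abs(random.gauss(0, 1)) for _ in range(n)]
pareto = [random.paretovariate(1.1) for _ in range(n)]  # heavy-tailed, alpha = 1.1

for name, sample in [("gaussian", gaussian), ("power law", pareto)]:
    share = max(sample) / sum(sample)
    print(f"{name:>9}: largest single event = {share:.1%} of the total")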
There are many indications of this extreme instability, not only in negative episodes such as the financial crisis but also in positive developments, such as the continuous emergence of new players on the market, epitomised by Google. The random vulnerability of today's world is well illustrated by the chart of total reported disasters from EC DG RESEARCH.

Figure 4: Total Disasters Reported

2.3.8. Moving from conversations to action

The collaborative action of people is able to achieve seemingly unachievable goals: experiences such as Galaxy Zoo and Wikipedia show that mass collaboration can deliver disruptive innovation. Yet too often web-based collaboration is confined to complaints and discussions, rather than action. As one blogger put it, paraphrasing Marx: "Philosophers have only interpreted the world: the point is to complain about it" [14].

For example, the 2012 Italian elections saw an explosion of social media activity discussing the different candidates. This energy then failed to translate into concrete action in the aftermath of the elections.

[14] Quoted in Mick Fealty, "The wisdom of crowds", The Guardian, 24 February 2007, http://www.guardian.co.uk/commentisfree/2007/feb/24/towardsadeliberativedemocra

2.3.9. Detect non-compliance and mis-spending through better transparency

In times of crisis, it is ever more important for governments to ensure that financial resources are well spent and policies are duly implemented. But monitoring is a cost in itself, and a certain margin of inefficiency in resource deployment is somehow "natural". Yet the cost of this mismanagement is staggering: for instance, in 2010, 7.7% of all Structural Funds money was spent in error or against EU rules [15], and OECD estimates put the cost of corruption at 5% of global GDP [16]. It would therefore be crucially important to avoid such mismanagement through anticipatory corrective actions.

[15] http://www.europeanvoice.com/article/2011/november/commission-names-worst-managers-of-eu-money/72613.aspx
[16] http://www.oecd.org/dataoecd/51/5/49693613.pdf

2.3.10. Understand the impact of policies
Measuring the impact of policies remains a challenge. Ideally, policy-makers would like real-time, clear evidence on the direct impact of their choices. Instead, the effects of a policy are often delayed in time, and the ultimate impact is affected by a multitude of factors in addition to the policy itself. Timely and robust evaluation remains an unsolved puzzle.

This is particularly true for research and innovation policy, where the results of investment naturally appear years later. As Kuhlmann and Meyer-Krahmer (1994) put it, "the results of evaluations necessarily arrive too late to be incorporated into the policy-making process".

2.4. When policy-making 2.0 becomes a reality: a tentative vision for 2030

This is a scenario of how future policy-making could be deployed in an ideal world, if all the opportunities of policy-making 2.0 tools were taken. It aims to illustrate how these technologies and methods could concretely be deployed and the effects they would have. It is deliberately a normative scenario, describing a positive and concrete future at a very high level.

The scenario is organised along the typical phases of the policy-making cycle. It applies to a hypothetical new privacy directive being developed in 2030.

2.4.1. Agenda-setting phase: recognizing the problem

Brussels, 2030. The EC task force on privacy and data protection is alerted by a number of events. Its yearly report, accompanied by publication as linked open data in February, has been accessed by more than 10,000 people in a week. Several high-profile blogs have published a geo-visualized mash-up of the task force data with data from customer complaints about broadband slowness. The figures speak for themselves: customer complaints about privacy infringements and identity theft, collected through both the government's single feedback system and social media, exactly mirror the broadband disruptions. All seem to point to some kind of "data theft" at the infrastructural level of the Internet. A similar analysis on linked open data shows an abnormal concentration of complaints about credit card fraud from users of a limited number of ISPs that have struggled to obtain the infrastructure security certification. Anomalies in this correlation seem to weaken the case, but are quickly explained when social network analysis is carried out: not only the users of these ISPs but also their friends and contacts are the most likely to report fraud.

While some years ago the task force members would still have addressed this through the traditional slow policy process, only realizing its social impact once the mass media took it up, today a quick look at social media analytics confirms that the public is deeply concerned.
Hashtags like #wherearemydata are drawing thousands of comments. The task force obtains real-time reports on the sentiment and opinions being shared publicly; it appears clear that people feel unprotected by existing instruments and regulation, and voice their dissatisfaction mainly towards the task force itself. In particular, the reputation report quickly identifies a limited number of social media activists with high influence in shaping public opinion on the matter, as their messages spread quickly. Historical text analysis of social media makes it possible to predict that users who complain about privacy infringements are likely to dramatically decrease the extent to which they share information and data on the web over the following weeks. This drastic cut in content sharing becomes a serious liability for an economy which is now built on the assumption that people naturally share and collaborate on the web. The reduction in knowledge sharing predicted by social media analytics could lead to a contraction of economic activity, which is already fragile.
An in-depth investigation discovers a hardware hacking group that has targeted a selected number of poorly protected broadband providers to steal data directly from their traffic. The policy agenda of the task force is quickly switched to developing a revision of the regulation, in order to better link broadband regulation with data protection.

An open collaboration group is convened, with the direct involvement of users previously identified as "highly influential" through social media analytics. In addition, cross-analysis reputation tools are used to identify "experts" on joined-up policy approaches to data protection and broadband infrastructure, based on integrated data from social media (e.g. Klout and LinkedIn) and scientific impact (e.g. Altmetrics, ISI impact factor). This group is called on to provide an independent, fact-based analysis of the problem based on the best available data.

In particular, the group is asked to understand and model the causal relationship between fear of privacy infringements and the reduction in knowledge sharing, and between this reduction and economic growth. The analysis is carried out through a combination of network analysis, system dynamics and agent-based modelling. The group's report simulates several possible scenarios, but the common theme is that a reduction in sharing activities by key influencers could lead to a major economic downturn, as non-sharing behaviour will soon spread from the "geeks" to the general public through imitation and social pressure.

The report is published for public review, enabling in-line comments and in-depth analysis of the raw data and models behind it. It brings in hundreds of comments. Once a quick text-mining analysis is carried out, the comments turn out to cluster on an unjustified assumption in one scenario, and on a limited set of issues regarding the potential negative impact on net neutrality of implementing new regulation. Hence, the scenario is double-checked and the assumption clarified, and net neutrality experts are brought on board in the working group.

2.4.2. Policy design

Once the nature and size of the problem are clarified, the working group is called on to design possible policy measures. A crowdsourcing exercise is launched, where anyone can submit ideas for specific amendments to the present regulation. The analysis initially focuses on the most voted suggestions, but these turn out not to be especially insightful. A reputation management system is then integrated with the exercise, allowing original ideas and insights to be identified through voting "weighted" by expertise in the field. A set of 10 recommendations is presented for further analysis by the working group.
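As an aside, the expertise-weighted voting imagined here can be sketched in a few lines (a hypothetical illustration; the ideas, voters and expertise scores are invented): the ranking by raw vote counts and the ranking by expertise-weighted scores can differ, which is exactly how less popular but more insightful proposals would surface.

# A minimal sketch of reputation-weighted idea ranking: each vote is
# weighted by the voter's (hypothetical) expertise score, so insightful
# but less popular ideas can still surface. All data are made up.
ideas = {"idea A": [], "idea B": [], "idea C": []}
votes = [  # (voter expertise in [0, 1], idea voted for)
    (0.2, "idea A"), (0.3, "idea A"), (0.1, "idea A"), (0.2, "idea A"),
    (0.9, "idea B"), (0.8, "idea B"),
    (0.5, "idea C"),
]
for expertise, idea in votes:
    ideas[idea].append(expertise)

raw = {idea: len(v) for idea, v in ideas.items()}
weighted = {idea: sum(v) for idea, v in ideas.items()}
print("raw counts:     ", sorted(raw.items(), key=lambda kv: -kv[1]))
print("weighted scores:", sorted(weighted.items(), key=lambda kv: -kv[1]))
# Raw counts favour idea A (4 votes); expertise weighting favours idea B.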
The working group, based on this input, formulates three policy options along the following axes:
- continuing with the current data protection framework;
- enhancing it with greater forms of self-regulation, increased transparency, easier enforcement and greater empowerment of users;
- defining a new, stricter data protection regulation.

The three options are then run through the large-scale simulation engine, which combines agent-based modelling, system dynamics, network analysis and big data analytics. This makes it possible to anticipate the unexpected effects of a new, stricter regulation: it would probably induce virtuous broadband providers to conform only to the minimum requested by regulation, while private companies that typically choose the most secure providers would have to increase their expenditure on security, in particular in the field of web services, which is already weak in Europe and exposed to global competition. In addition, consumers would be satisfied with a perception of increased security,
thereby reducing their attention to their own control over data, which would in the long term increase the risk of another security crisis.

All the models underlying the simulation are open to public review, in order to ensure the transparency of the initiative. Indeed, the first version of the ex-ante impact assessment showed option 1 to be the least risky and most beneficial, but a little-known researcher from Greece quickly pointed out a banal coding mistake in the database used to compute the historical series of privacy infringements.

In the end, option 2 is chosen as the most effective. The amendments to the regulation are rapidly drafted, publicly reviewed and then turned into law by the European Parliament.

2.4.3. Implementation

The regulation envisages a strong role for the public in both enforcement and self-regulation. Each local branch of a broadband provider has to publish in real time, as linked open data, the results of the security certification, as well as any traffic management intervention it carries out.

In addition, a set of "persuasive games" has been developed to help consumers manage and control their data flows. Users receive badges each time they perform a data safety self-assessment, which is easy to carry out through a highly visual smartphone app highlighting the extent to which the user's behaviour diverges from the public recommendations and from the people in his or her social network. Unexpectedly, a game of being the "safest kid on the block" takes off, particularly among teenagers, who compete to outdo each other's safety provisions. New business models for third-party data management services are launched for those less interested in managing their own data.

As a result, the propensity to buy, share and collaborate online increases appreciably, driving a moderately positive economic impact.

2.4.4. Evaluation

Real-time data analytics on the performance of data providers, as well as anonymized data on consumers' data protection measures, allow decision-makers to identify potential breaches as soon as they happen.

Participatory sensing tools, combined with opinion mining, allow citizens and policy-makers to easily monitor when new problems emerge.

Open data on the measures taken by regulators allow civil society organisations to verify the adequacy of government intervention.

2.5. The key challenges for policy-makers and the corresponding phases in the policy cycle

Let us now relate the key challenges of policy-making activity to the phases of the policy cycle:
• The Agenda Setting phase is mostly related to the challenges:
  o Detect and understand problems before they become unsolvable
  o Manage crisis and the "unknown unknown"
  o Ensure long-term thinking
• The Design phase is mostly related to the challenges:
  o Encourage behavioural change and uptake
  o Identify "good ideas" and innovative solutions to long-standing problems
  o Reduce uncertainty on the possible impacts of policies
  o Generate high involvement of citizens in policy-making
• The Implementation phase is mostly related to:
  o Moving from conversations to action
  o Reduce uncertainty on the possible impacts of policies
• The Monitoring and Evaluation phase is mostly related to:
  o Detect non-compliance and mis-spending through better transparency
  o Manage crisis and the "unknown unknown"
3. The supply side: current status and the Research Challenges

In this section we illustrate in detail each research challenge that needs to be addressed in order to make the vision a reality and meet the policy challenges described in the previous chapter. For each challenge we describe:
- the definition;
- the potential opportunities for governance;
- the state of the art in the market and in research;
- the existing challenges; and
- the recommended research themes.

The research challenges are organised in two groups: the first comprises six challenges on Policy Modelling, while the second comprises nine challenges on Collaborative Governance.

3.1. Policy Modelling

3.1.1. Systems of Atomized Models

Introduction and definition

This research challenge seeks ways to model a system by using already existing models, or by composing more comprehensive models from smaller building blocks, sometimes also called "atoms", either by reusing existing objects/models or by building them from scratch. The most important issue is therefore the definition and identification of proper (or most apt) modelling standards, procedures and methodologies, by using existing ones or by defining new ones. Further to that, this sub-challenge calls for establishing the formal mechanisms by which models might be integrated in order to build bigger models, or simply to exchange data and valuable information between models. Finally, the issue of model interoperability, as well as the availability of interoperable modelling environments, should be tackled, together with the need for feedback-rich models that are transparent and easy for the public and decision-makers to understand.

Why it matters in governance

Using existing objects/models that can describe systems, sub-systems and the interactions among them allows everyone to build their own insight into a specific problem or solution. In governance, such an opportunity gives us the chance to:
• Release public data, linking them and producing visual representations able to reveal unanticipated insights.
• Use social computing to promote engagement and citizens' inclusion in policy decisions, and exploit the power of ICT in mining and understanding the opinions they express.
• Analyse policies and produce models that can be visualised and run to produce simulations able to show effects and impacts from different perspectives: political, economic, social, technological, environmental and legal.

Current Practice and Inspiring cases

In systems analysis, it is common to deal with the complexity of an entire system by considering it to consist of interrelated sub-systems. This leads naturally to considering models as consisting of sub-models. Such a (conceptual) model can be implemented as a computer model that consists of a number of connected component models (or modules). Component-oriented designs represent a natural choice for building scalable, robust, large-scale applications, and for maximizing ease of maintenance in a variety of domains.

An implementation based on component models has at least two major advantages:
• First, new models can be constructed by coupling existing component models of known and guaranteed quality with new component models. This has the potential to increase the speed of development.
• Secondly, the forecasting capabilities of two different component models can be compared directly, rather than having to compare whole simulation systems.

Further, common and frequently used functionalities, such as numerical integration services, visualisation and statistical ex-post analysis tools, can be implemented as generic tools, developed once and for all, and easily shared by model developers. Nevertheless, the current practice of composing and re-using models is still not sufficiently widespread. As regards model reuse, this is mainly because few if any repositories actually exist [17]. Moreover, the publicly available models are not "open" to modification or re-use. It would be useful if every paper containing a model included a link to an on-line version that people could run and modify. Some modelling environments (or modelling suites) provide examples and small libraries of ready-to-use models, but in most cases they are not completely open, nor is any explanation provided on how to reproduce them (their structure, parameters, etc.).

As an inspiring case, see the SEAMLESS project, which was funded by the EU Sixth Framework Programme (Global Change and Ecosystems), ran from 2005 until March 2009, and developed a computerized framework for the integrated assessment of agricultural systems and the environment [18].
During the project, a modular approach was chosen to develop a system named the Agricultural Production and Externalities Simulator (APES), illustrated in Figure 5. APES is a modular simulation system targeted at estimating the biophysical behaviour of agricultural production systems in response to the interaction of weather, soils and different options of agro-technical management. Although a specific, limited set of components is available in the first release, the system is being built to incorporate, at a later stage, other modules needed to simulate processes not included in the first version. The processes are simulated in APES with deterministic approaches, mostly based on mechanistic representations of biophysical processes. APES was used to compare alternative agricultural and environmental policy options, facilitating the assessment of key indicators that characterize interactions between agricultural systems, natural and human resources, and society.

[17] This is true for most sectors, even though, for instance, most energy models are based on the MARKAL family of models. Furthermore, models need to be customized, so having a single framework readily applicable to different contexts and sectors may actually be counter-productive.
[18] http://www.seamless-ip.org
The developed framework, named SEAMLESS-IF, in a final stage also enabled the linkage of quantitative models, pan-European databases and qualitative procedures to simulate the impact on society of biophysical, economic and behavioural changes. SEAMLESS-IF now facilitates ex-ante assessments at the full range of scales, from the global to the field level, to support policy and decision-making for sustainable development. SEAMLESS-IF can be used to investigate the effects of agricultural and environmental policies while accounting for technical innovations. Further, the interactions of such policies with other major trends, such as climate change and the increasing use of land for bio-fuel crops, can be studied efficiently.
Figure 5: Agricultural Production and Externalities Simulator (APES)

Analyses with SEAMLESS-IF can be done at multiple scales and with varying time horizons, whilst focusing on the most important issues emerging at each scale. This is possible because the framework is based on research innovations in linking models across scales, allowing consistent "micro-macro" analysis, as well as in linking models across disciplines, allowing "economic-biophysical" analysis. The linked models range from a biophysical field model to a farm model and to an agricultural sector model for the EU; in other words, they ensure a consistent analysis of the effects EC policies may have on agricultural markets, farming systems and the environment. In addition, the effectiveness of a policy in its institutional context is assessed by applying qualitative procedures. The interlinked pan-European database provides the relevant data needed at different scales.

For another inspiring example, see Insight Maker (http://insightmaker.com/). Insight Maker allows users to build simulation models ("Insights") at all scales: from the smallest cell, to the social effects of product adoption, to global climate change. Once built, models can be shared with others. The models are called "Insights" because they typically reveal one or more fascinating points about the system under study. All simulations built with Insight Maker can be shared via the web; this means people can change the variables and see the results for themselves.

Vensim Molecules [19] is software for constructing system dynamics models from "molecules" of system dynamics structure. Molecules are made of primitive stock-and-flow or auxiliary elements and are, in turn, the building blocks of complete models: elements of substructure serving a particular purpose. Molecules provide a framework for presenting important and commonly used elements of model structure, making it faster and easier to develop system dynamics models.

[19] http://www.vensim.com/molecule.html
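The "molecule" idea can be illustrated with a minimal sketch (our own toy example, not taken from Vensim): a reusable stock element, integrated numerically with Euler's method, is composed with simple inflow and outflow definitions into a small population model.

# A minimal sketch of a reusable stock-and-flow building block, composed
# into a tiny model (a population with births and deaths). Illustrative only.
class Stock:
    def __init__(self, value):
        self.value = value
        self.inflows, self.outflows = [], []

    def step(self, dt):
        """Euler integration: net flow accumulated over one time step."""
        net = sum(f() for f in self.inflows) - sum(f() for f in self.outflows)
        self.value += net * dt

population = Stock(1000.0)
birth_rate, death_rate = 0.03, 0.01
population.inflows.append(lambda: birth_rate * population.value)
population.outflows.append(lambda: death_rate * population.value)

dt = 0.25
for _ in range(40):          # simulate 10 years in quarterly steps
    population.step(dt)
print(f"population after 10 years: {population.value:.0f}")

The same Stock building block could be reused unchanged in any other stock-and-flow model, which is precisely the economy that molecules aim for.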
AnyLogic [20] is a multi-method simulation modelling tool capable of integrating and combining the following modelling approaches: system dynamics, discrete-event simulation and agent-based modelling. AnyLogic's simulation language is composed of stock-and-flow diagrams (used for system dynamics modelling), statecharts (which define agents' behaviour in agent-based modelling), action charts (used to define algorithms), and process flowcharts (the basic constructs for defining processes in discrete-event modelling).

Available Tools

A very interesting tool is En-ROADS [21], a global simulation model that focuses on how changes in global GDP, energy efficiency, R&D results, carbon price, fuel mix and other factors will change carbon emissions, energy access and temperature. En-ROADS is designed to complement other, more disaggregated models addressing these questions, and relies on those models and on EIA projections for testing and data.

[20] http://www.xjtek.com/
[21] http://climateinteractive.org/simulations/en-roads
En-ROADS is customized to address enquiries about how much technological breakthroughs might contribute to addressing climate change. Such breakthroughs include, for instance, the R&D and scale-up of a new zero-carbon energy supply, renewable energy, energy efficiency, inexpensive natural gas, etc. More precisely, En-ROADS investigates which assumptions about technology and the economy would be necessary for a breakthrough to grow with enough speed and scale to deliver climate goals.
One of the most innovative aspects of En-ROADS is its capability to test assumptions about the potential success of R&D on zero-carbon energy. In particular, the simulation investigates the likely dynamics of the emergence of a new energy supply, as well as how fast it could grow, displace high-carbon sources and reduce carbon emissions. En-ROADS is an extension of the C-ROADS model, which is described later in the roadmap. The distinction between the two models is that C-ROADS focuses on how changes in national and regional emissions could affect GHG emissions and climate outcomes, while En-ROADS focuses on how changes in the energy, economic and public policy systems could influence GHG emissions and climate outcomes.

Key challenges and gaps

With regard to the implementation architecture and use of modelling frameworks, there are two major problems:
• the framework design and implementation must be optimized to balance flexibility against usability, to avoid incurring either a performance penalty or too steep a learning curve for users; and
• developing components for a specific framework constrains their use to that framework.

The most immediate option for overcoming these problems is to develop inherently reusable (i.e. non framework-specific) components, which can be used in a specific modelling framework by encapsulating them in dedicated classes called "wrappers"; such classes act as bridges between the framework and the component interface (see the sketch below). The disadvantage of this solution is the creation of another "layer" in the implementation, which adds to the machinery already implemented in the framework. The appropriateness of this solution, in terms of both ease of implementation and overall performance, must be evaluated case by case.
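A minimal sketch of the wrapper pattern just described (all class and method names are invented for illustration): a reusable component with its own native interface is bridged into a hypothetical framework that expects a different one.

# A minimal sketch of the "wrapper" idea: a reusable, framework-agnostic
# component is bridged into a (hypothetical) framework expecting a
# different interface. All names are illustrative.
class RainfallRunoff:
    """Inherently reusable component with its own native interface."""
    def compute(self, rainfall_mm: float) -> float:
        return 0.4 * rainfall_mm          # toy runoff coefficient

class FrameworkComponent:
    """Interface the (hypothetical) modelling framework expects."""
    def update(self, inputs: dict) -> dict:
        raise NotImplementedError

class RainfallRunoffWrapper(FrameworkComponent):
    """Bridges the framework interface to the component interface."""
    def __init__(self, component: RainfallRunoff):
        self.component = component

    def update(self, inputs: dict) -> dict:
        runoff = self.component.compute(inputs["rainfall_mm"])
        return {"runoff_mm": runoff}

# The framework only ever sees FrameworkComponent objects:
module = RainfallRunoffWrapper(RainfallRunoff())
print(module.update({"rainfall_mm": 12.0}))   # {'runoff_mm': 4.8}

The extra indirection visible in update() is exactly the additional "layer" whose cost must be weighed case by case.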
Regardless of the choice between framework-specific and intrinsically reusable components, there is a more basic choice which must be carefully evaluated first. It concerns, in general terms, the framework as a flexible modelling environment for building complex models (model linking), but also the framework as an efficient engine for the simulation and calibration of model components (model execution). Modern software technologies allow building flexible, coherent and elegant constructs, but this comes at a performance cost. Without introducing specific references to Object-Oriented Programming (OOP), it is important to point out that object-oriented programming constructs, which enhance flexibility, modularity and software reuse, require the compiler to use virtual method calls, dynamic dispatching, and so on. All these operations are resource-intensive and in some cases can heavily affect code performance; this becomes evident in applications where such calls are made thousands of times at every simulation step.

Moreover, the model composition horizon is even more clouded, as the potential advantages of composing bigger models from smaller ones have been demonstrated only recently. This is essentially due to the problem of interoperability and integration of different vendors' (thus proprietary) model formats, and to the lack of standards for performing composition tasks. Another problem stems from the fact that many models are still too dependent on their implementation methodology. Furthermore, model integration is at present almost non-existent. Very few modelling environments/suites provide import/export functionalities, and a standard language for model interoperability is not currently available. Most current practice for data communication or information transfer relies on third-party solutions: interoperability is in most cases achieved by transferring data via electronic spreadsheets or, only in rare cases, by using Database Management Systems (DBMS) or Enterprise Resource Planning (ERP) systems.

Current research

Current research, like previous research, has not yet tackled the problem of integrating different models, with the exception of just a few cases. At present, owing to the plethora of different modelling/simulation environments and suites, as well as to differences between scientific fields, many competing file formats exist. It is possible that vendors perceive modelling practice as a very small market niche (users come mainly from academia and, to a very small extent, from private companies where Decision Support Systems are used, while the Public Administration share is negligible) and are therefore reluctant to introduce interoperability features.
In addition, current and previous research has only recently begun to explore the following issues:
• Open-source modelling and simulation environments (open environments are rising in importance in the research community, albeit in most cases they only provide the possibility to implement and simulate a model according to the single modelling methodology they refer to).
• Communication of data among models developed in different proprietary (or open) environments through third-party solutions (interoperability is in most cases achieved only by transferring data by means of electronic spreadsheets or, in rare cases, by using a DBMS or an organisation's ERP).
• Open visualisation of results stemming from model simulation (e.g. online visualisation of simulation results in a browser, either by interfacing with the simulation engines directly, which happens only in a few cases, or, as is more often the case, by connecting through a third-party means, as described in the previous bullet point).
Future research

Future research should therefore focus on:
• Definition of standard procedures for model composition and decomposition, e.g. how to deductively pass from a macro-description of a model to the fine definition of its building blocks or molecules (top-down approach), and how to inductively conceive a progressive composition of bigger models by aggregating new modules as soon as they are needed (bottom-up approach) or by expanding already existing objects.
• Proposition of a minimum set of archetypal structures, building blocks or molecules to be used at the proper level of decomposition of the model (e.g. systemic archetypes, in the Systems Thinking / System Dynamics approach, might be useful for describing overall behaviour in terms of the main variables of the system to be modelled at a macro-to-middle level). The procedures for implementing, validating and redistributing any further improvement of these "minimal" objects should be investigated.
• Definition of open modelling standards as the basis for interoperability, that is, defining common file formats and templates (e.g. by means of XML) which would allow models described in such files to be opened, accessed and integrated into every (compliant) model-design and simulation environment [22]; a minimal sketch of what such an exchange format could look like is given after this list.
• Interoperability, also in terms of Service-Oriented Architectures (e.g. certain stand-alone, always-operative models might expose "services" making available their endogenous data, bits of information, or some peculiar function or structural part, while other models may request those services when needed; this creates the need to define model repositories, a list of operative models and the functionalities they expose, which finally entails the definition of a SOA among interoperable models).
• Definition and implementation of model repositories (and procedures for adding new objects to them), even if restricted to hosting models developed according to a specific methodology (agent-based, system dynamics, event-oriented, stochastic, etc.).
• Definition and implementation of the new relationships that are created when two models are integrated. All the important relationships resulting from a model integration/composition should be identified and eventually included in the new, derived integrated model.
• Input/output definition and re-definition: the integration of modelling techniques is a pertinent issue within the scope of this challenge. Multi-modelling tools should in the future be available not only to experts but also to lay users.
Moreover, at present, only a few of the available modelling/simulation suites make it possible to build a model according to a different modelling methodology.

22 Portability is very hard to achieve: in the past there was a significant effort, first with SMILE and later with XMILE, to make System Dynamics models portable between different software packages. This was close to a best-case scenario, as the packages involved were very similar to begin with, yet the effort was unsuccessful, since agreement and implementation were never reached. Deciding on a universal format to bind existing things together seems unlikely to work (http://xkcd.com/927/). There has been greater success with approaches like Modelica, where a format ships together with an application and other programs then decide to adopt the format themselves, a more organic process.
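To make the open-standards bullet concrete, here is a minimal sketch of what an XML-based interchange format for a simple stock-and-flow model might look like, loosely in the spirit of the SMILE/XMILE efforts mentioned in the footnote. The element and attribute names are hypothetical and do not come from any published standard.

```python
# Minimal sketch of an XML interchange format for a stock-and-flow model,
# loosely inspired by SMILE/XMILE. All element names are hypothetical.
import xml.etree.ElementTree as ET

MODEL_XML = """
<model name="population" method="system_dynamics">
  <stock name="Population" initial="1000000"/>
  <flow name="births" into="Population" equation="Population * birth_rate"/>
  <parameter name="birth_rate" value="0.01"/>
</model>
"""

root = ET.fromstring(MODEL_XML)
# A compliant environment would walk this tree and rebuild its own
# internal representation of the model.
for element in root:
    print(element.tag, dict(element.attrib))
```

The hard part, as the footnote suggests, is not the syntax but getting competing vendors to agree on the semantics of elements such as the equation attribute.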
3.1.2. Collaborative Modelling

Introduction and definition
The English sayings "two heads are better than one" and "too many cooks spoil the broth" give an idea of the conflicting expectations that arise when people collaborate. On the one hand, one would expect a group of people to be able to observe and perceive situations better, and to make better decisions, than a single person could. On the other hand, it is also common knowledge that the collaboration of several people entails the problem of group coordination which, if disregarded, can make group work inefficient compared to the work of a single person. There is a need for an authoritative gatekeeper, who is also the modeller implementing the model; otherwise collaborative sessions quickly get out of control, with non-implementable ideas becoming the focus of discussion.

There are three kinds of problems that are typically approached by groups:
• cognition problems, problems with a definite solution or a set of solutions that are certainly better than others;
• coordination problems, problems that require the group to figure out how to coordinate the behaviour of its members;
• cooperation problems, problems which feature the involvement of several self-interested, distrustful people who have to work together.

Collaborative modelling (also called group model building) refers to a process in which a number of people actively contribute to the creation of a model. The weakest form of involvement is feedback to the session facilitator, similar to the conventional way of modelling; stronger forms are proposals for changes or (partial) model proposals. In this particular approach the modelling process should be supported by a combination of narrative scenarios, modelling rules and e-Participation tools (all integrated via an ICT e-Governance platform), so that the policy model for a given domain can be created iteratively through the cooperation of several stakeholder groups (decision makers, analysts, companies, civil society and the general public).

As a matter of fact, groups require rules (or cultural norms) to maintain order and coherence, as well as diversity and independence among their members, in order to create a kind of collective intelligence. Bringing together people with diverse perspectives and backgrounds to work in multi-disciplinary teams is expected to improve overall group performance, so the first issue on which the collaborative process should be based is the definition of a shared modelling-rules framework (the social norms), guiding the modelling team in determining whether a proposal is accepted or rejected.
Two commonly adopted types of rules, sketched in code after the list, are:
• Rules of majority, where a certain number of group members have to support or oppose a proposal in order for the whole group to accept or reject it (e.g. more than half). A tie-break rule is sometimes specified (e.g. for the case of an equal number of supporters and opponents); the tie-break may involve seniority.
• Rules of seniority, where the weight of a group member's support or opposition is related to his or her status within the group. This status can be acquired (e.g. by experience) or associated with a position to which the member has been appointed. A frequent example is that of a more experienced modeller who is considered the leader by the group and takes decisions on its behalf, with the other members filling the role of consultants.
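As an illustration, here is a minimal sketch of how these two decision rules might be encoded. The vote representation, the member names and the seniority weights are hypothetical.

```python
# Minimal sketch of majority and seniority rules for accepting a model
# proposal. The vote format and weights are illustrative only.

def majority_rule(votes, tie_breaker=None):
    """votes maps member name -> True (support) / False (oppose)."""
    support = sum(votes.values())
    oppose = len(votes) - support
    if support != oppose:
        return support > oppose
    # Tie-break, e.g. by deferring to a designated senior member.
    return votes[tie_breaker] if tie_breaker else False

def seniority_rule(votes, weights):
    """weights maps member name -> seniority weight."""
    score = sum(weights[m] * (1 if v else -1) for m, v in votes.items())
    return score > 0

votes = {"ana": True, "ben": False, "eva": True, "lia": False}
print(majority_rule(votes, tie_breaker="ana"))                          # True
print(seniority_rule(votes, {"ana": 3, "ben": 1, "eva": 1, "lia": 1}))  # True
```

In practice such rules are rarely this explicit; as noted below, they mostly emerge from the members' behaviour.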
These rules are sometimes set up explicitly before the group begins its work, or in an early phase of that work, but in most cases they rather emerge as the result of each member's behaviour. Individuals making regular contributions of high quality are likely to acquire seniority, and in homogeneous teams majority rules are used more often.

Why it matters in governance
From a very high level of abstraction, collaborative modelling itself can be seen as a social interaction between several people, while the people who together perform the modelling process form a social entity. Thus the process of collaboratively defining and implementing a model, with particular reference to public policy modelling, is strictly connected with the public aspect of every citizen's life, from the communities bridged by the decision makers who collaboratively define policies, down to the average citizen who interacts with other citizens within the rules framework defined by those very policies.

Starting from the needs perceived by citizens, the limitations of the existing modelling techniques adopted in policy making include the following issues:
• Changing models is too time-consuming and integrating them with other diagrams is difficult; there are also version-control problems.
• It is not possible for more than one person to work on the same diagram at the same time.
• Modelling has to be done at the specific location where the modeller is present.
• Contributions to the model come only from those interviewed or present at a group meeting, limiting the potential contribution of a larger group.
• Low model acceptance: the model resulting from the modelling session is not supported by some of the stakeholders.
• Participants feel misunderstood, as a consequence of poor elicitation or a wrong understanding of the model.
• Low perceived model quality and limited model comprehension: individuals do not fully understand the model or do not agree with it.

Reasons that argue for conducting policy modelling in a collaborative manner are:
• No single person typically understands all the requirements; understanding tends to be distributed across a number of individuals.
• A group is better capable of pointing out shortcomings than an individual.
• Individuals who participate during analysis and design are more likely to cooperate during implementation.
Collaborative modelling calls for the definition of the citizen's role in the public policy modelling process (mass-participation issues and processes have already been researched in depth by the e-Participation research programmes). In order to guarantee participation, some prerequisites should be fulfilled:
• all citizens who access ICT services in order to participate should represent the views of the communities affected by the given policy;
• all citizens should be able to take part in the modelling process via intuitive IT systems that enable an effective and efficient contribution;
• all citizens should possess the proper skills (or be assisted) to purposely follow a process of group model-building, in order to avoid or abate wrong mental models and thus ultimately reach a shared vision of the problem.

Current Practice and Inspiring cases
In current practice, collaborative modelling is mainly performed offline, and the rules and guidelines for session processes are not yet sufficiently widespread. In fact, the abatement of wrong mental models and the creation of knowledge from information usually imply a dialogue among people with different views of the problem, as well as the need for critical skills. Furthermore, the information that emerges in a discussion has to be grounded and definitively transferred to the formal model. Thus e-Participation may help to achieve a critical mass of data and information exchange online, but in itself it does not solve the problem of mass cooperation and collaboration in a formal modelling process. Even more, participation in this process currently requires a thorough knowledge of modelling processes and tools that the average citizen does not have. There is therefore an urgent need for intuitive interfaces, modelling wizards and guided, simplified approaches to modelling. Starting from the relevance of collaborative modelling in policy making, a very early inspiring case is that of Maarten Sierhuis and Albert M. Selvin who, working at NYNEX Science & Technology Inc. in New York, presented in 1996 an applied research report, "Towards a Framework for Collaborative Modelling and Simulation", describing methodologies for modelling and simulation in a collaborative analysis or design project, together with a case study in which Conversational Modelling, a software-supported technique for collaborative modelling, enabled participants to construct static knowledge models in collaborative sessions.
The sessions described in the report resulted in the identification of 207 queries, of which 24 were chosen for detailed modelling. As a result of the modelling, 44 resources, 29 knowledge items, 58 data items and 8 organizational issues were identified. The response from participants was positive: many stated that they had learned more about each other's work in the conversational modelling sessions than they had been able to in the course of their normal work activities. The development organization was able to use the output of the sessions to generate design requirements. A picture of the interface (Figure 6) used during the sessions follows.
Figure 6: Conversational Modelling Interface

As a more recent inspiring case, one can refer to the results of the FP7 projects OCOPOMO23 and PADGETS24. The latter, PADGETS, aims at bringing together two well-established domains: the mashup architectural approach of web 2.0 for creating web applications (gadgets) and the methodology of system dynamics for analysing complex system behaviour. The objective is to design, develop and deploy a prototype toolset that will allow policy makers to graphically create web applications to be deployed in the environment of underlying knowledge in Web 2.0 media. The project introduces the concept of the Policy Gadget (PADGET), similar to gadget applications in web 2.0, to represent a micro web application that combines a policy message with underlying group knowledge in social media (in the form of content and user activities) and interacts with end users in popular locations (such as social networks, blogs, forums, news sites, etc.) in order to obtain and convey their input to policy makers.

23 Open Collaboration in Policy Modelling, http://www.ocopomo.eu
24 http://www.padgets.eu
Figure 7: the PADGET Framework

A PADGET is composed of four main components (a minimal sketch of these components follows below):
• A message, that is, a policy in any of its stages and forms.
• A set of interaction services that allow users to interact with the policy gadget (find it, access its content, comment on its content, share it, etc.). These interfaces may be provided either by the underlying social media platforms in which the PADGET campaign is launched or by the PADGET itself when it takes the form of a micro application (e.g. an iGoogle gadget).
• The social context, that is, the framework describing the social activity and content relating to the policy gadget in each individual social media platform where the policy gadget is present.
• The decision services, which are offered by two modules, the PADGETS analytics and the PADGETS simulation model. The decision services component is responsible for generating the information outputs to be presented to the PADGET initiator (usually a policy maker).

PADGETS will use publicly available APIs for interconnecting with, publishing to and retrieving content from the underlying social media platforms. The collected information and the user activities that policy gadgets invoke in the media platforms will be categorized using semantic tags as to their relation to the policies, in order to help the policy maker form an opinion about what users think about the relevant issues and policies.
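Purely as an illustration of how the four components fit together, the following sketch represents them as a single data structure. The class, field and method names are hypothetical and do not reflect the actual PADGETS toolset.

```python
# Hypothetical sketch of the four PADGET components described above.
# Names are illustrative only; this is not the actual PADGETS API.
from dataclasses import dataclass, field

@dataclass
class Padget:
    message: str                                         # the policy message
    interaction_services: list = field(
        default_factory=lambda: ["find", "comment", "share"])
    social_context: dict = field(default_factory=dict)   # platform -> activity
    decision_outputs: dict = field(default_factory=dict) # analytics results

    def record_activity(self, platform, comment):
        self.social_context.setdefault(platform, []).append(comment)

    def analyse(self):
        # Toy "decision service": count reactions per platform so the
        # initiator can see where the policy message resonates.
        self.decision_outputs = {p: len(c)
                                 for p, c in self.social_context.items()}
        return self.decision_outputs

p = Padget(message="Proposed congestion charge for the city centre")
p.record_activity("blog", "Will public transport be improved first?")
print(p.analyse())   # {'blog': 1}
```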
An interesting model is Threshold 21 (T21), built using the System Dynamics methodology to facilitate decision making. Feedback relations among key variables within sectors are simulated endogenously by T21, and those relations can lead to further feedback to other sectors. This process is valuable for learning more about the complex interactions among and across sectors that need to be taken into account in order to develop more effective policies and mitigate or avoid negative side effects. This approach makes it possible to bring together experts from different sectors to better understand these relations and to obtain the data needed to translate a qualitative causal diagram (a map of the system) into a quantitative model, using a participatory approach (a minimal stock-and-flow sketch in this spirit follows the case below).

[Figure: Causal Diagram Example, showing positive (+) and negative (-) links among Society, Economy and Environment variables (population, health, employment, emissions, renewable energy, fuel prices, water stress, water demand, agriculture production, GDP, government expenditure, productivity, fossil fuel consumption, income per capita, nutrition), with reinforcing (R) and balancing (B) loops: Nutrition loop, Agriculture-water loop, Health-GDP-energy loop, Health-GDP loop.]

Source: "Macro Economic Policy Analysis Applications", presented by John Shilling at the Transatlantic Research on Policy Modelling Workshop25

25 http://www.CROSSOVER-project.eu/Workshop/WorkshopProgram.aspx

An interesting case of the application of the model to inform policy making regarded China. More particularly, a number of agencies and NGOs working in China cooperated with the MI in an attempt to address a wide variety of issues related to achieving more sustainable growth: dealing with resource constraints (e.g. water, agriculture, energy) in the face of a large and growing population, reducing GHG emissions, and promoting more innovation to move down a greener path. The effects of growth in population, the economy, transportation, energy consumption and food imports on growth prospects were examined. The study also identified some important cross-sector factors; for example, slowing the growth of the per capita size of housing units would have many beneficial effects, including less cement and steel production, leading to lower GHG emissions, lower energy demand and less reduction in arable land for agriculture.
These illustrate important cross-sector effects and show why policies need to take a broad view of their results. By generating scenarios over the longer term, up to 2030, the model was able to show how continuing business as usual would lead to significant challenges and tipping points in water demand, agricultural production and GHG emissions. It was then demonstrated how policies that shift the level of consumption and innovation would have significant impacts on sustainability, including the finding that a slower increase in overall consumption may be critical for achieving sustainability on these indicators, despite lower GDP growth and job creation. This shows that it is important to develop policies that mitigate the weaker performance while assuring sustainability and providing everyone with an acceptable living standard. It clearly illustrates the policy challenges faced and lays the basis for developing more effective policies.

Scenario summary for 2030:

  Indicator                  Unit             Baseline   Low Consump,   High Consump,
                                                         High Tech      Low Tech
  real GDP                   RMB2000/Yr       6.67E+13   5.64E+13       7.23E+13
  per capita real GDP        RMB2000/Yr       46,829     39,745         47,580
  unemployment rate          % of workforce   6.12%      18.67%         0.00%
  total electricity demand   Bn KWH/Yr        8,191      7,124          8,949
  total petroleum demand     MT/Yr            1,059      791            1,294
  fossil fuel CO2 emission   Ton/Yr           1.16E+10   8.87E+09       1.40E+10
  Agriculture Land           Ha               1.15E+08   1.19E+08       1.08E+08
  total water demand         Ton/Yr           6.07E+11   4.61E+11       7.93E+11

Source: "Consumption and Sustainability: A Quantitative Approach Based on T21 China", presented by Weishuang Qu at the Transatlantic Research on Policy Modelling Workshop26

26 http://www.CROSSOVER-project.eu/Workshop/WorkshopProgram.aspx
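To make the System Dynamics mechanics behind a model like T21 concrete, here is a minimal stock-and-flow sketch with a single feedback loop between population and GDP. The equations and coefficients are invented for illustration and bear no relation to the actual T21 model.

```python
# Minimal System Dynamics sketch: one feedback loop between population
# and GDP. Equations and coefficients are purely illustrative.

def simulate(years=10, dt=1.0):
    population, gdp = 1_000_000.0, 5.0e9      # the two stocks
    for t in range(int(years / dt)):
        income_per_capita = gdp / population
        # Feedback: higher income slightly lowers the net growth rate.
        net_growth_rate = 0.02 - 1e-6 * income_per_capita
        births = population * net_growth_rate  # flow into the population stock
        gdp_growth = gdp * 0.03                # simple exogenous trend
        population += births * dt              # stock updates
        gdp += gdp_growth * dt
        print(f"year {t + 1}: pop={population:,.0f}, "
              f"income/capita={income_per_capita:,.0f}")

simulate()
```

Full models such as T21 contain hundreds of such stocks and flows across social, economic and environmental sectors, which is what makes the cross-sector effects described above visible.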
Available Tools
Research on collaborative software has been conducted since the mid-1980s, when computer-human interaction, office automation and support for group work became the focus of research projects. The term computer-supported cooperative work (CSCW) was first used in 1984 and focused on the support of small groups of people. Other terms are used as synonyms for CSCW, especially collaborative computing, computer-mediated communication and group decision support systems. CSCW is defined as a "computer-assisted coordinated activity such as communication and problem solving carried out by a group of collaborating individuals", or as a field which "looks at how groups work and seeks to discover how technology (especially computers) can help them work". The term groupware also stems from the 1980s and is defined as "computer-based systems that support groups of people engaged in a common task (or goal) and that provide an interface to a shared environment". Interestingly, some authors see groupware as advanced software that has to provide awareness support, while other authors also understand code management or emailing as groupware systems. In contrast to groupware, CSCW does not only comprise the technological aspects of collaboration but also incorporates its psychological, social and organizational effects.

Collaborative technologies, especially in the field of groupware and CSCW, are typically classified using the time-space taxonomy, which distinguishes between communication that occurs at the same place or concurrently at different places, and communication that occurs at the same time (synchronously) or at different times (asynchronously). This view was established in 1988 by R. Johansen ("GroupWare: Computer Support for Business Teams", The Free Press, New York) and taken up in various related publications. The following figure depicts the typical time-space matrix as presented in these publications.

Figure 8: the Time-Space Matrix

The matrix divides collaborative technologies into four possible constellations, each of which can be supported better or worse by different communication media.

However, the architecture of a collaborative modelling tool, i.e. a system that supports a group in developing models, is still under investigation. Some authors have suggested groupware systems that help teams in collective sense-making, which is an important part of the modelling process. Conklin, Selvin, Buckingham Shum and Sierhuis, in "Facilitated Hypertext for Collective Sensemaking: 15 Years on from gIBIS", a paper presented in 2003 at the 8th International Working Conference on the Language-Action Perspective on Communication Modelling (Tilburg, The Netherlands), report on an approach, Compendium, that is the result of 15 years of experience. Compendium combines three different areas: meeting facilitation, graphical hypertext and conceptual frameworks. To make them work together, facilitation is viewed as essential to remove the cognitive overhead for the group
members. As groupware systems address the important issue of collective sense-making, they can be used as the core of a collaborative modelling tool. So far, however, these systems are typically tailored to specific modelling languages. For a collaborative modelling tool they need to be more modular, so that any modelling language can be "plugged in" (e.g. other enterprise or information-systems modelling languages). In addition, there is also the need for a negotiation component that facilitates structured arguments and decisions regarding modelling choices. Based on these reflections and issues, two tools have recently been emerging:
• The COllaborative Modelling Architecture, COMA27, which allows group modelling. Any group member can work on the models whenever it suits them, and any participant can contribute in whatever way they can: by just looking at proposals and commenting on them, by making minor changes to them, or even by making proposals of their own. The facilitator can see the status of the modelling process at any time and can decide whether a certain proposal should be adopted or needs improvement, based on the comments of the other group members and his or her own judgment (a minimal sketch of this workflow follows below).

Figure 9: COMA, COllaborative Modelling Architecture

COMA's design has been inspired by theoretical insights from organizational semiotics and driven by observations of group modelling behaviour. The tool is implemented in Visual C++ 2005 on Windows, based on UML Pad and the wxWidgets GUI library28.

• The OCOPOMO eParticipation platform, deployed by the Open Collaboration for Policy Modelling FP7 project, which ended in December 2012.

27 www.coma.nu
28 http://www.wxwidgets.org/
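As an illustration of the proposal-and-facilitator workflow that COMA is described as supporting, consider the following minimal sketch. The classes and method names are hypothetical; they are not COMA's actual implementation, which, as noted above, is written in Visual C++.

```python
# Hypothetical sketch of a COMA-style workflow: members submit model
# proposals and comments, and a facilitator adopts or returns them.

class Proposal:
    def __init__(self, author, description):
        self.author, self.description = author, description
        self.comments, self.status = [], "open"

class ModellingSession:
    def __init__(self, facilitator):
        self.facilitator, self.proposals = facilitator, []

    def submit(self, author, description):
        proposal = Proposal(author, description)
        self.proposals.append(proposal)
        return proposal

    def decide(self, proposal, adopt):
        # Only the facilitator decides, based on comments and judgment.
        proposal.status = "adopted" if adopt else "needs improvement"

session = ModellingSession(facilitator="senior modeller")
p = session.submit("citizen A", "add a 'housing size' variable to the model")
p.comments.append(("analyst B", "needed for the construction feedback loop"))
session.decide(p, adopt=True)
print(p.status)   # adopted
```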
Figure 10: OCOPOMO eParticipation Platform

The platform is a suite of ICT tools for:
o iterative development of policies in the form of narrative scenarios;
o policy modelling, i.e. the creation of agent-based formal policy models;
o open and transparent collaboration in the process of policy development;
o seamless, goal-oriented information exchange between all the stakeholders (policy analysts, operators, decision makers, wider interest groups, the general public, etc.);
o simulation and visualisation of policy alternatives and their consequences.
A first prototype was released in autumn 2011 and tested in a first round of pilot applications started in winter 2011. A second round of pilot applications and evaluation started in autumn 2012, and the platform was released in December 2012.

Key challenges and gaps
This research challenge is connected to research on Web 2.0 and the next-generation web. As far as Policy Modelling in Governance is concerned, this research challenge bridges the gap between citizens and decision makers. It permits an early-stage evaluation of decision makers' mental models by opening a dialogue with citizens and allows for an exchange of perspectives. It finally enables collaboration in the public policy modelling process through the use of a rigorous and formal scientific process.

Current research
According to current research, the following issues are being explored:
• Group model building and systems thinking, focusing on models used when tackling a mix of interrelated strategic problems to enhance team learning, foster consensus and create commitment; although people have different views of the situation and define problems differently, this field of research shows that such diversity can be very productive if and when people learn from each other in order to build a shared perspective.
• Web 2.0 tools for collaboration, as recently demonstrated by the FP7 project OCOPOMO (Open Collaboration in Policy Modelling), which aims to implement collaborative scenario building and policy modelling via an integrated ICT toolbox. OCOPOMO provides an innovative, "off the mainstream" bottom-up approach to policy development, combined with advanced ICT tools and techniques supporting open collaboration. The project is developing an ICT-based environment integrating lessons and practical techniques from complexity science, agent-based social simulation, foresight scenario analysis and stakeholder participation, in order to formulate and monitor social policies to be adopted at several levels. The project is co-funded by the European Commission under the 7th Framework Programme, Theme 7.3 (ICT for Governance and Policy Modelling).

Future research
Future research should therefore focus on:
• Collaborative Internet-based modelling tools, allowing more than one modeller to cooperate on a single model at the same time.
• The definition of frameworks allowing even "low-skilled" citizens to contribute (even if in a discursive way) to the modelling process.
• The design of more intuitive and accessible human-computer interfaces.
3.1.3. Easy Access to Information and Knowledge Creation

Introduction and definition
According to a cybernetic view of intelligent organisations, knowledge supersedes (1) facts, (2) data (statements about facts) and (3) meaningful information (what changes us), the last also defined as "the difference that makes the difference". Knowledge is most often defined as "whatever is known, the body of truth, information and principles acquired" by a subject on a certain topic. Knowledge is therefore always embodied in someone. It implies insight, which in turn enables orientation, and may thus also be used as a potential for action (when we are able to use information in a certain environment, we start to learn, which is the process that helps to develop and ground knowledge). Two more concepts come after knowledge on the same scale: understanding and wisdom. Understanding is the ability to transform knowledge into effective action, i.e. in-depth knowledge, involving both deep insight into the patterns of relationships that generate the behaviour of a system and the ability to convey knowledge to others, whereas wisdom is a higher quality of knowledge and understanding that includes the ethical and aesthetic dimensions.

This research challenge is related to the elicitation of information which, during the overall model building and use processes, will help decision makers learn how a certain system works and ultimately gain insights (knowledge) and understanding (the application of the knowledge extracted from those processes) in order to successfully implement a desired policy. It is important to note that other research fields (in particular, ICT disciplines) tend to misuse the word "knowledge" and conflate it with "information".

Why it matters in governance
Proper information acquisition and knowledge development are key aspects of all research fields, so this research challenge has a horizontal importance for research in general. According to the general need for policy assessment and evaluation, there are some specific issues stemming from this research challenge which are strongly related to governance:
• public data use, and thus public information elicitation (by citizens);
• citizens' behavioural data, which are gradually becoming essential for any policy assessment process;
• interoperability of public IT systems;
• creation of a common understanding of a certain system's behaviour (by means of learning), in order to develop a shared vision of the problems that a certain policy might aim to overcome.

Current Practice and Inspiring cases
In current practice, information is drawn from data stored in different types of media (mainly DBMSs/ERPs). Web 2.0 has further transformed the way we create data and elicit information from data.
Data availability has ceased to pose problems as a result of:
• the growth of the Internet and its uptake;
• User-Generated Content in social networks;
• the cooperation of IT systems from different organisations thanks to Service-Oriented Architectures (even among old legacy systems), which has also resulted in private data availability;
• Public Administration transparency and public data use/reuse.

Available Tools
A review of the available tools is to be finalized.

Key challenges and gaps
Knowledge is still mostly created and passed on by formal methods of teaching, even though the advent of e-Learning, m-Learning and webinars allows for an increased possibility of performing distance learning on the Web. But since knowledge is developed and grounded by the learning process through action in the environment, learning in real life comes from making mistakes. In the field of real-life governance, this entails implementing a wrong policy and observing the positive and negative consequences that the policy generates (for example due to a system's "policy resistance"). Learning from successes is also important; the A.I. method, for example, is based on positive psychology. At present, thanks to increasing data availability, the information elicitation process is much easier, either by tacitly bringing users (data generators) to provide data in a guided way (according to a pre-set framework for data input) or with the help of a specific process (e.g. consultations in e-Participation tools).

Current research
In current research, the main focus is on the Knowledge Management field or also (more properly, as in our case) on the Knowledge Elicitation field. The latter basically encompasses the following steps:
• data retrieval and extraction;
• data analysis and interpretation (which usually produces information);
• data/information adaptation and integration (particularly where information needs to be used in a model).

Future research
A large field remains to be explored: methods for extracting meaningful information from unstructured sources of data, e.g. when analysing free texts. This applies to all sources of User-Generated Content (forums, wikis, social networks, etc.), where the semantic dimension is essential to derive meaningful information, rather than just quantitatively analysing the syntax of the text. In general, a lot of data is generated by citizens, and particularly by their behaviour online, so that the available aggregated data sets contain information on what a citizen does, what s/he likes, how s/he behaves in certain environments, and so on. This data is considered very valuable for both private and public organisations (even though it is subject to privacy restrictions which have to be properly addressed). A minimal sketch of text-based extraction follows.
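As a minimal sketch of what purely quantitative analysis of free text looks like (and of why it falls short of the semantic dimension discussed above), the following counts salient terms in invented citizen comments. A real pipeline would need linguistic and semantic resources far beyond this.

```python
# Minimal sketch: term counting over free-text citizen comments.
# It captures word frequency, not meaning, which is exactly the gap
# that semantic approaches are meant to fill.
import re
from collections import Counter

comments = [
    "The new bus lanes reduced my commute, please extend them",
    "Bus fares are too high and the buses are always late",
    "Extend the bike lanes instead of raising fares",
]

STOPWORDS = {"the", "and", "are", "of", "my", "too",
             "them", "instead", "please"}

tokens = [word for comment in comments
               for word in re.findall(r"[a-z]+", comment.lower())
               if word not in STOPWORDS]
print(Counter(tokens).most_common(4))
# [('bus', 2), ('lanes', 2), ('extend', 2), ('fares', 2)]
```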
Also, with regard to knowledge creation and the development of understanding (of a specific system), some research is currently being carried out on how to improve the learning process via the use of e-Learning systems. In this respect, it is crucial to boost research on micro-worlds, i.e.
complex virtual environments in which reality is somehow reproduced and in which decision makers are trained to implement their strategies and hypotheses and to perform what-if analysis, without necessarily having to learn from mistakes in real life.

Future research will thus have to focus on the following issues:
• Information elicitation by analysing and interpreting data, also taking the semantic point of view into account.
• Creation of proper micro-worlds (or ILEs, Interactive Learning Environments), where the acquired information on a certain system is used (by means of actions) and knowledge is developed by observing the outcomes of those actions (a minimal sketch follows this list). ILEs will also have to be integrated into LMSs (Learning Management Systems) in order to extend the potential of distance-learning practices, eventually also in a cooperative way (mass learning).
• Interoperability of data sources, in order to integrate/aggregate different types of data and be able to automatically infer information from more meaningful datasets.
• In view of the "Internet of Things", the provision of "portable" models/tools for citizens in order to gather valuable data based on citizens' real behaviours. Moreover, these models and tools would enable citizens to check the results of their actions by analysing in real time the response of the model to the information they contribute to generating, thus evaluating the eventual benefits of their virtuous behaviour, or the harm they are causing either to their environment or to themselves (e-Cognocracy).
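A micro-world in this sense can be as simple as a model that lets the user act and then shows the consequences. The following is a minimal what-if sketch around an invented one-stock "public budget" dynamic; a real ILE would wrap a validated policy model in a rich interactive interface.

```python
# Minimal micro-world sketch: the user tries a policy lever (a tax rate),
# observes the outcome and learns without real-world consequences.
# The one-stock dynamic below is invented for illustration.

def run_what_if(tax_rate, years=5):
    budget, economy = 100.0, 1000.0
    for _ in range(years):
        revenue = economy * tax_rate
        economy *= (1.10 - tax_rate)   # heavy taxation slows growth
        budget += revenue - 120.0      # fixed yearly public spending
    return budget, economy

for tax_rate in (0.05, 0.10, 0.20):
    budget, economy = run_what_if(tax_rate)
    print(f"tax {tax_rate:.0%}: budget={budget:8.1f}, economy={economy:8.1f}")
```

Trying the three levers shows the trade-off immediately: the lowest rate starves the budget, the highest shrinks the economy, and the learner discovers the tension by experimenting rather than by implementing a wrong policy in real life.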
3.1.4. Model Validation

Introduction and definition
Policy makers need and use information stemming from simulations in order to develop more effective policies. As citizens, public administrations and other stakeholders are affected by decisions based on these models, the reliability of the applied models is crucial. Model validation can be defined as "substantiation that a computerised model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model" (Schlesinger, 1979). A policy model should therefore be developed for a specific purpose (or context) and its validity determined with respect to that purpose (or context)29. If the purpose of such a model is to answer a variety of questions, the validity of the model needs to be determined with respect to each question. A model is considered valid for a set of experimental conditions if the model's accuracy is within its acceptable range, which is the amount of accuracy required for the model's intended purpose. The substantiation that a model is valid is generally considered to be a process, and is usually part of the (total) policy model development process (Sargent, 2008). For this purpose, specific and integrated techniques and ICT tools need to be developed for policy modelling.

Model validation is composed of two main phases:
• Conceptual model validation, i.e. determining that the theories and assumptions underlying the conceptual model are correct, and that the model's representation of the problem entity and the model's structure, logic, and mathematical and causal relationships are "reasonable" for the intended purpose of the model.
• Computerised model verification, which ensures that the computer programming and implementation of the conceptual model are correct, and that the overall behaviour of the model is in line with the available historical data.

Why it matters in governance
Model validation is connected both to modelling and to simulation. According to the general need for policy assessment and evaluation, there are some specific issues stemming from model validation which are strongly related to governance:
• Reliability of models: policy makers use simulation results to develop effective policies that have an important impact on citizens, public administrations and other stakeholders. Model validation is fundamental to guarantee that the output (simulation results) delivered to policy makers is reliable.
• Acceleration of the policy modelling process: policy models must be developed in a timely manner and at minimum cost in order to support policy makers efficiently and effectively. Model validation is both costly and time-consuming and should be automated and accelerated.
29 Some researchers claim that the category "validity" has little meaning in relation to policy models, as they are generally a form of narrative or storytelling, so that their value lies in obtaining a better understanding of the system, communicating concepts effectively, and spurring discussion between different stakeholders.
• Modular and re-usable models: a policy model developer deciding to re-use existing models, or to compose them, stumbles over the issue of the models' reliability. Model validation can be used to certify this reliability and to create a database of validated models.

Current Practice and Inspiring cases
The approach most frequently used in current practice is a decision by the development team, based on the results of the various tests and evaluations conducted as part of the model development process. Another approach is to engage users in the validation process. When developing large-scale simulation models, the validation of a model can be carried out by an independent third party; needless to say, the third party needs to have a thorough understanding of the intended purpose of the simulation model. Finally, a scoring model can be used for testing the model's validity (see e.g. Balci 1989; Gass 1983; Gass & Joel 1987). Scores (or weights) are determined subjectively when conducting various aspects of the validation process, and are then combined to determine category scores and an overall score for the simulation model. A simulation model is considered valid if its overall and category scores are greater than some passing score.

Available Tools
A review of the available tools is to be finalized.

Key challenges and gaps
Typically, all the above-mentioned approaches are applied after the simulation model has been developed. Performing a complete validation effort after the simulation model has been finalised requires both time and money. Conducting model validation concurrently with the development of the simulation model, however, enables the model development team to receive input earlier, at each stage of model development. Therefore, ICT tools for speeding up, automating and integrating the model validation process into the policy model development process are necessary to guarantee the validity of models with an effective use of resources.

Current research
In current research there is a large number of subjective and objective validation techniques used for verifying and validating the modules and the overall model. Robert G. Sargent of Syracuse University provided a list of the relevant ones in 2010: animation; comparison to other models; degenerate tests; event validity; extreme-condition tests; face validity; historical data validation; historical methods; internal validity; multistage validation; operational graphics; parameter variability / sensitivity analysis; predictive validation; traces; and Turing tests. Furthermore, he described a new statistical procedure for validating simulation and analytic stochastic models using hypothesis testing when the amount of model accuracy is specified.
This procedure provides for the model to be accepted if the differences between the system and the model outputs are within the specified ranges of accuracy. The system must be observable, so that data can be collected for validation (a minimal sketch of such an accuracy-range check follows).
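As a sketch of the underlying idea, comparing system observations with model output and accepting the model when the differences stay within a specified accuracy range, consider the following. The data, the 10% tolerance and the pointwise criterion are illustrative choices, not Sargent's actual statistical procedure.

```python
# Minimal sketch of an accuracy-range validation check: the model is
# accepted if its output stays within a given tolerance of the observed
# system data. Data and tolerance are illustrative only.

def validate(system_data, model_output, relative_tolerance=0.10):
    assert len(system_data) == len(model_output)
    for observed, simulated in zip(system_data, model_output):
        if abs(simulated - observed) > relative_tolerance * abs(observed):
            return False          # outside the acceptable range
    return True

observed  = [102.0, 110.5, 121.0, 133.8]   # historical system data
simulated = [100.0, 112.0, 118.5, 131.0]   # model output, same periods

print("model accepted:", validate(observed, simulated))  # True
```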
Future research
Future research should explore the following issues:
• In order to speed up and reduce the cost of the model validation process, user-friendly and collaborative statistical software should be developed, possibly combined with expert systems and artificial intelligence.
• Given the big gap between theory and practice, a considerable opportunity exists for the study and application of rigorous verification and validation techniques; in current practice, the comparison of model and system performance measures is typically carried out in an informal manner.
• Complicated simulation models are usually either not validated at all or only subjectively validated (for example, animated output is eyeballed for a short while). Complexity issues in model validation may therefore be better addressed through the development of more suitable methodologies and tools.
• Model validation is not a discrete step in the simulation process: it needs to be applied continuously, from the formulation of the problem to the implementation of the study findings, as a completely validated and verified model does not exist and the validation and verification process of a model is never completed.
• As model developers are inevitably biased and may concentrate on the positive features of a given model, the third-party approach (a board of experts) seems to be a better solution for model validation.
• Considering the range that simulation studies cover (from small models to very large-scale simulation models), further research is needed to determine, with respect to the size and type of simulation study:
o which model validation approach should be used,
o how model validation should be managed,
o what type of support-system software for model validation is needed.
• Validating large-scale simulations that combine different simulation (sub-)models and use different types of computer hardware, as is currently being done in the HLA (High Level Architecture). A number of these VV&A issues need research, e.g. how to verify that the simulation clocks and event (message) times (timestamps) have the same representation (floating point, word size, etc.), and how to validate that events with time ties are handled properly.
3.1.5. Immersive Simulation

Introduction and definition
As policy models grow in size and complexity, the process of analysing and visualising the resulting large amounts of data becomes an increasingly difficult task. Traditionally, data analysis and visualisation were performed as post-processing steps after a simulation had been completed. As simulations increased in size, this task became increasingly difficult, often requiring significant computation, high-performance machines, high-capacity storage and high-bandwidth networks. Computational steering is an emerging technology that addresses this problem by "closing the loop" and providing a mechanism for integrating modelling, simulation, data analysis and visualisation. This integration allows a researcher to interactively control simulations and perform data analysis while avoiding many of the pitfalls associated with the traditional batch / post-processing cycle. This research challenge refers to the integration of visualisation techniques within an integrated simulation environment. Such integration plays a crucial role in making the policy modelling process more extensive and, at the same time, more comprehensible. In fact, the real aim of interactive simulation is, on the one hand, to allow model developers to easily manage complex models and their integration with data (e.g. real-time data or qualitative data integration) and, on the other hand, to allow the other stakeholders not only to better understand the simulation results but also to understand the model and, eventually, to be involved in the modelling process. Interactive simulation can dramatically increase the efficiency and effectiveness of the modelling and simulation process, allowing the inclusion and automation of phases (e.g. output and feedback analysis) that have not been managed in a structured way up to this point.

Why it matters in governance
Immersive simulation is a particular aspect of simulation. As far as Policy Assessment in Governance is concerned, this challenge may help to:
• Accelerate the simulation process: policy makers would be able to analyse simulation results, run new scenarios and make decisions as soon as possible and at minimum cost.
• Provide a collaborative environment: the larger the number of stakeholders involved in the policy modelling and simulation process, the greater the need for an interactive simulation environment that allows non-experts to use the model and understand the results, and permits experts to easily understand new requirements and the consequent modifications.
• Support citizen engagement: interactive simulation tools help to engage citizens in the policy-making process and to display the results to them in a simple way.
• Support data integration: interactive simulation tools allow better management of a large number of different types of data and information, both for input and for output/feedback analysis.

Current Practice and Inspiring cases
In current practice, data analysis and visualisation, albeit critical for the process, are often performed as a post-processing step after batch jobs are run. For this reason, errors in validating
the results of the entire simulation may be discovered only during post-processing. What is more, the decoupling of simulation from analysis/visualisation can present serious scientific obstacles to the researcher in interpreting the answers to "what if" questions. Given the limitations of the batch / post-processing cycle, it is advisable to break the cycle and improve the integration of simulation and visualisation. The implementation of an interactive simulation and visualisation environment requires the successful integration of many aspects of scientific computing, including performance analysis, geometric modelling, numerical analysis and scientific visualisation. These requirements need to be effectively coordinated within an efficient computing environment. Recently, several tools and environments for computational steering have been developed. They range from tools that modify the performance characteristics of running applications, either by automated means or by user interaction, to tools that modify the underlying computational application, thereby allowing application steering of the computational process.

Available Tools
A review of the available tools is to be finalized.

Key challenges and gaps
The development of immersive tools is still based on model developers' needs, and a gap therefore still exists between the requirements of policy makers and those of developers. In a collaborative modelling environment, interaction is fundamental in order to speed up the process and make ICT tools user-friendly for all the stakeholders involved in the policy model development process.

Current research
In current research, interactive visualisation typically combines two main approaches: providing efficient algorithms for the presentation of data, and providing efficient access to the data. The first advance is evident, albeit challenging: even though computers continually get faster, data sizes are growing at an even more rapid rate, so the total time from data to picture is not decreasing for many problem domains. Alternative algorithms, such as ray tracing (Nakayama, 2002) and view-dependent algorithms (Lessig, 2009), can restore a degree of interactivity for very large datasets; each of these algorithms has its trade-offs and is suitable for a different scenario. The second advance is less evident but very powerful: through the integration of visualisation tools with simulation codes, a scientist can achieve a new degree of interactivity through the direct visualisation and even manipulation of the data. The scientist does not need to wait for the computation to finish before interacting with the data, but can interact with a running simulation.
While conceptually simple, this approach poses numerous technical challenges.

Future research
With regard to future research, interactive simulation plays a crucial role in a collaborative modelling environment. The trade-off between the possibility of enlarging models and including several kinds of data, on the one hand, and the number of people who can understand and modify the model, on the other, should be analysed in depth. For this purpose, some fundamental issues must be addressed:
• Systems should be modular and easy to extend within the existing codes.
• Users of the systems should be able to add new capabilities easily without being experts in systems programming.
• Input/output systems should be easily integrated.
• Steering systems should be adaptable to hardware ranging from the largest supercomputing systems to low-end workstations and PCs.

3.1.6. Output Analysis and Knowledge Synthesis

Introduction and definition
Inputs driving a simulation are often random variables, and because of this randomness in the components driving simulations, the output from a simulation is also random, so statistical techniques must be used to analyse the results. In particular, the output processes are often non-stationary and auto-correlated, so classical statistical techniques based on independent identically distributed observations are not directly applicable. In addition, by observing simulation output it is possible to infer the general structure of a system, ultimately gaining insights on that system and being able to synthesise knowledge about it. There is also the possibility of reviewing the initial assumptions by observing the outcome and comparing it to the expected response of the system, i.e. performing a modelling feedback on the initial model. Finally, one of the most important uses of simulation output analysis is the comparison of competing systems or alternative system configurations.
Visualisation tools are essential for the correct execution of this iterative step. The present research challenge deals with the issue of output analysis of a policy model and, at the same time, of feedback analysis, in order to incrementally increase and synthesise the knowledge of the system.

Why it matters in governance
Output analysis is a specific aspect of simulation. Beyond the general need for policy assessment and evaluation, there are some specific issues stemming from output analysis which are strongly related to governance:
• Acceleration of the policy assessment process: automated output analysis tools would help policy makers to efficiently and effectively analyse the impacts of a policy even when a large amount of simulation data must be taken into account.
• Citizen engagement: user-friendly automated tools for output analysis can be offered to citizens in order to share the simulation results and better engage them in the policy-making process.

Current Practice and Inspiring cases
In current practice, a large amount of time and financial resources is spent on model development and programming, but little effort is allocated to analysing the simulation output data in an appropriate manner. As a matter of fact, a very common way of operating is to make a single simulation run of somewhat arbitrary length and then treat the resulting simulation estimates as the "true" characteristics of the model.
Since random samples from probability distributions are typically used to drive a simulation model through time, these estimates are realisations of random variables that may have large variances. As a result, these estimates could, in a particular simulation run, differ greatly from the corresponding true answers for the model. The net effect is that there may be a significant probability of making erroneous inferences about the system under study. Historically, there are several reasons why output data analysis has not been conducted in an appropriate manner. First, users often have the unfortunate impression that simulation is just an exercise in computer programming.
Consequently, many simulation studies begin with heuristic model building and computer coding, and end with a single run of the program to produce "the answers". In fact, however, a simulation is a computer-based statistical sampling experiment. Thus, if the results of a simulation study are to have any meaning, appropriate statistical techniques must be used to design and analyse the simulation experiments, and ICT tools must be developed to make the process more effective and efficient. In addition, there are some important issues of output analysis that are not strictly connected to statistics. In particular, an evident gap in the literature regards the analysis and integration of feedback in the modelling and simulation process. At present, stakeholders are involved in a post-processing phase in order to analyse the results (more often, only an elaboration of them) and understand something about the policy. Sometimes they are able to give feedback on the difference between their expectations and the result, but the process is not structured and effective tools are lacking. The development of tools for analysing and integrating feedback should be explored in order to enlarge the number of stakeholders involved and, at the same time, to allow efficient and effective modifications at each phase of the process, incrementally increasing the knowledge of the model and, consequently, of the given policy.

Available Tools
A review of the available tools is to be finalized.

Key challenges and gaps
A fundamental issue for statistical analysis is that the output processes of virtually all simulations are non-stationary (the distributions of the successive observations change over time) and auto-correlated (the observations in the process are correlated with each other). Thus, classical statistical techniques based on independent identically distributed observations are not directly applicable. At present, there are still several output-analysis problems for which there is no commonly accepted solution, and the solutions that are available are often too complicated to apply. Another impediment to obtaining accurate estimates of a model's true parameters or characteristics is the cost of the computer time needed to collect the necessary amount of simulation output data. Indeed, there are situations where an appropriate statistical procedure is available, but the cost of collecting the amount of data dictated by the procedure is prohibitive.

Current research
In current research, the main references are Law (1983), Nakayama (2002), Alexopoulos & Kim (2002), Goldsman & Tokol (2000), Kelton (1997), Alexopoulos & Seila (1998), Goldsman & Nelson (1998) and Law (2006).
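The "single run treated as truth" problem described above is easy to demonstrate. The following minimal sketch (a toy M/M/1 queue simulated via the Lindley recurrence; the arrival and service rates, run length and number of replications are all illustrative assumptions) shows how widely the mean waiting time estimated from one run can vary across independent replications of the same model.

```python
import random
import statistics

def mean_wait(n_customers: int, lam: float = 0.9, mu: float = 1.0) -> float:
    """One run of an M/M/1 queue: average wait via the Lindley recurrence."""
    w, total = 0.0, 0.0
    for _ in range(n_customers):
        total += w
        service = random.expovariate(mu)
        interarrival = random.expovariate(lam)
        # Lindley: next customer's wait depends on the previous one's.
        w = max(0.0, w + service - interarrival)
    return total / n_customers

random.seed(42)
estimates = [mean_wait(1000) for _ in range(10)]  # ten independent replications
print([round(e, 2) for e in estimates])           # single-run answers spread widely
print("replication mean:", round(statistics.mean(estimates), 2),
      "stdev:", round(statistics.stdev(estimates), 2))
```

Any single run's estimate could plausibly have been reported as "the answer", which is exactly the erroneous-inference risk the literature warns about; averaging over replications, with a confidence interval, is the minimal remedy.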
For output analysis, there are two types of simulations:
• Finite-horizon simulations. In this case, the simulation starts at a specific moment and runs until a terminating event occurs. The output process is not expected to achieve steady-state behaviour, and any parameter estimated from the output will be transient in the sense that its value will depend upon the initial conditions (e.g. a simulation of a vehicle storage and distribution facility over one week).
• Steady-state simulations. The purpose of a steady-state simulation is the study of the long-run behaviour of the system of interest. A performance measure of a system is called a steady-state parameter if it is a characteristic of the equilibrium distribution of an output stochastic process (e.g. simulation of a continuously operating communication system where the objective is the computation of the mean delay of a data packet).
Future research
Referring to the previously cited works, and in particular to Goldsman (2010), future research should further explore the following issues:
• ICT tools for supporting or automating output/feedback analysis
• Allowing an incremental understanding of the model (knowledge synthesis)
• Adapting Design of Experiments (DOE) to policy model simulation
• Use and integration of more sophisticated variance estimators (see the sketch after this list)
• Better ranking and selection techniques.
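As one example of the variance-estimation techniques at stake, the sketch below (illustrative only; the AR(1) output process, warm-up length, batch count and the t quantile are assumptions) applies the classical method of non-overlapping batch means to an auto-correlated steady-state output stream, producing a confidence interval that accounts for the correlation a naive i.i.d. formula would ignore.

```python
import random
import statistics

random.seed(7)

# Toy steady-state output: an AR(1) process, positively auto-correlated like
# typical simulation output (true long-run mean is 0).
phi, x, output = 0.8, 0.0, []
for _ in range(20_000):
    x = phi * x + random.gauss(0.0, 1.0)
    output.append(x)

output = output[2_000:]          # delete a warm-up period (initial transient)
b = 20                           # number of non-overlapping batches (assumed)
m = len(output) // b             # batch size
batch_means = [statistics.mean(output[i * m:(i + 1) * m]) for i in range(b)]

# Batch means are approximately independent when batches are long enough,
# so a Student-t interval on them is (approximately) valid.
grand_mean = statistics.mean(batch_means)
half_width = 2.093 * statistics.stdev(batch_means) / b ** 0.5  # t_{0.025,19}
print(f"mean = {grand_mean:.3f} +/- {half_width:.3f} (95% CI via batch means)")
```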
3.2. Data-powered Collaborative Governance

3.2.1. Big Data

Summary Overview

Current free tools: the freely available tools permit users to overcome data limitations, simplify the analytical process and visualize results. The functionalities provided by these software packages are:
- Massively parallel processing (MPP) database products for large-scale analytics and next-generation data warehousing
- Data-parallel implementations of statistical and machine learning methods
- Visual data mining modelling

Top market tools:
- Data storage platforms and other information infrastructure solutions
- Massively parallel processing (MPP)
- Dataflow engines and software interconnect technologies
- Data discovery and exploration tools
- Built-in text analytics, enterprise-grade security and administrative tools
- Real-time analytics processing (RTAP)
- Visualization features supporting exploratory and discovery analytics
- On-line analytical processing (OLAP)
- Business intelligence (BI) and Data Warehouse (DW)
- Enterprise Data Warehouse (EDW)

Current and Future Research:
- Technologies for collecting, cleaning, storing and managing data: data warehouses; pivotal transformation; ETL; I/O; efficient archiving, storing, indexing, retrieval and recovery; streaming, filtering, compressed sensing, sufficient statistics; automatic data annotation; large database management systems; storage architectures; data validity, integrity, consistency and uncertainty management; languages, tools, methodologies and programming environments
- Technologies for summarizing data and extracting meaning: reports; dashboards; statistical analysis and inference; Bayesian techniques; information extraction from unstructured, multimodal data; scalable and interactive data visualization; extraction and integration of knowledge from massive, complex, multi-modal or dynamic data; data mining; scalable machine learning; data-driven high-fidelity simulations; predictive modelling, hypothesis generation and automated discovery
- Technologies for using data as a decision tool: decision trees, pro-con analysis, rule-based systems, neural networks, tradeoff-based decisions

Introduction and definition
Big Data refers to datasets that cannot be stored, captured, managed and analysed by means of conventional database software. Big Data is thereby a subjective rather than a technical definition, because it does not involve a quantitative threshold (e.g. in terms of terabytes) but a moving technological one. Keeping that in mind, the definition of Big Data in many sectors ranges from a few terabytes30 to multiple petabytes31.
The definition of Big Data does not merely involve the use of very large data sets, but also concerns a computational turn in thought and research (Burkholder, ed., 1992).
30 1 terabyte is equal to 1 trillion bytes
31 1 petabyte is equal to 1000 terabytes
As stated by Latour (2009), when the tool is changed, the entire social theory that goes with it is also different. In this view, Big Data has emerged as a system of knowledge that is already changing the objects of knowledge itself, as it has the capability to inform how we conceive human networks and community. Big Data creates a radical shift in how we think about research itself. As argued by Lazer et al. (2009), not only are we offered the possibility to collect and analyze data at an unprecedented depth and scale, but there is also a change in the processes of research, the constitution of knowledge, the engagement with information and the nature and categorization of reality. The potential stemming from the availability of a massive amount of data is exemplified by Google. It is widely believed that the success of the Mountain View company is due to its brilliant algorithms, e.g. PageRank. In reality, the main novelties introduced in 1998, which led to second-generation search engines, involved the recognition that hyperlinks were an important measure of popularity and the use of the text of hyperlinks (anchortext) in the web index, giving it a weight close to the page title. First-generation search engines used only the text of the web pages, while Google added two data sets (hyperlinks and anchortext), so that even a less-than-perfect algorithm exploiting this additional data would obtain roughly the same results as PageRank. Another example is Google's AdWords keyword auction model. Overture had previously shown that ranking advertisers for a given keyword purely on their bids was an efficient mechanism. Google improved the tool by adding data on the clickthrough rate (CTR) of each advertiser's ad, so that advertisers were ranked by their bid and their CTR.
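The ranking mechanism just described can be stated in a few lines. The sketch below (in Python; the advertiser names, bids and rates are invented purely for illustration) contrasts ranking advertisers by bid alone with ranking by bid multiplied by CTR, showing how the additional data set changes the outcome.

```python
# Hypothetical advertisers: (name, bid in $ per click, clickthrough rate).
ads = [("A", 2.00, 0.01), ("B", 1.50, 0.04), ("C", 1.00, 0.05)]

by_bid = sorted(ads, key=lambda ad: ad[1], reverse=True)
# Expected revenue per impression = bid * CTR, as in an AdWords-style ranking.
by_bid_times_ctr = sorted(ads, key=lambda ad: ad[1] * ad[2], reverse=True)

print([ad[0] for ad in by_bid])           # ['A', 'B', 'C'] -- bid alone
print([ad[0] for ad in by_bid_times_ctr]) # ['B', 'C', 'A'] -- bid * CTR
```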
Why it matters in governance
Big Data also has a huge impact on governance and policy making. Its benefits apply to a wide variety of subjects:
• Health care: making care more preventive and personalized by relying on home-based continuous monitoring, thereby reducing hospitalization costs while increasing quality; detection of infectious disease outbreaks and epidemic development
• Education: by collecting all the data on students' performance, it would be possible to design more effective approaches. The collection of these data is made possible by the massive Web deployment of educational activities
• Urban planning: huge high-fidelity geographical datasets describing people and places are generated from administrative systems, cell phone networks and other similar sources
• Intelligent transportation based on the analysis and visualization of road network data, so as to implement congestion pricing systems and reduce traffic
• The use of ubiquitous data collection through sensor networks in order to improve environmental modelling
• Analysis and clarification of energy use patterns through data analytics and smart meters, which can be useful for adopting energy-saving policies and avoiding blackouts
• Integrated analysis of contracts in order to find relations and dependencies among financial institutions, so as to assess systemic financial risk
• The analysis of conversations in social media and networks, as well as the analysis of financial transactions carried out by alleged terrorists, which can be used for homeland security
• Assessment of computer security by means of logged information analysis, i.e. Security Information and Event Management
• Better tracking of food and pharmaceutical production and distribution chains
• Collecting data on water and sewer usage in order to reduce water consumption by detecting leaks
• Use of sensors, GPS, cameras and communication systems for crisis detection, management and response
• Use of sensor data for carbon footprint management

Policy Applications of Big Data Tools
There is a growing body of evidence highlighting the applications of Big Data not only in traditional hard science and business, but also in policy making, owing to the predictive power of the data. Some applications follow:
• Predictability of human behaviour and social events. A research team from Northwestern University32 was able to predict people's location based on mobile phone information generated from past movements. Moreover, Pentland from MIT33 conducted research showing that mobile phones can be used as sensors for predicting human behaviour, as they can quantify human movements in order to explain changes in commuting patterns given, for example, by unemployment. Recently, another research team, from Northeastern University, was able to predict the voting outcome of a famous US television programme (American Idol) based on Twitter activity during the time span defined by the TV show airing and the voting period following it34
• Public health. Online data can be used for syndromic surveillance, also called infodemiology35. As an example, Google Flu Trends is a tool based on the prevalence of Google queries for flu-like symptoms. As shown by Ginsberg et al. (2008)36, it is then possible to use search queries to detect influenza epidemics in areas with a large population of web search users. In fact, according to the US Center for Disease Control and Prevention (CDC)37, a great availability of data coming from online queries can help to detect epidemic outbursts before laboratory analysis. A related tool is Google Dengue Trends. In the same vein, the analysis of health-related Tweets in the US by Paul and Dredze (2011)38 found a high correlation between the modelled and the actual flu rate (a minimal sketch of this kind of correlation check follows this list). Likewise, Twitter data can be analyzed to study the geographic spread of a virus or disease39. Finally, there is Healthmap40, in which data from online news, eyewitness reports, expert-curated discussions and official reports are used to get a thorough view of the current global state of infectious diseases, visualised on a map
• Global food security. The Food and Agriculture Organization of the UN (FAO) is chartered with ensuring that the world's knowledge of food and agriculture is available to those who need it, when they need it, and in a form which they can access and use41.
In fact, the human population will approach 9 billion by 2050, so it will be necessary to put in place policies aimed at ensuring a sufficient and fair distribution of resources: world food production will have to increase by 60%, by increasing agricultural production and fighting water scarcity. The online data portal to be launched by FAO will enhance planners' and decision makers' capacity to estimate agricultural production potentials and variability under different climate and resource scenarios
32 http://online.wsj.com/article/SB10001424052748704547604576263261679848814.html
33 http://www.nytimes.com/2011/04/24/business/24unboxed.html?_r=1&src=tptw
34 http://www.mobs-lab.org/uploads/6/7/8/7/6787877/american_idol_finale.pdf
35 http://yi.com/home/EysenbachGunther/publications/2006/eysenbach2006c-infodemiologyamia-proc.pdf
36 http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/papers/detecting-influenza-epidemics.pdf
37 http://www.cdc.gov/ehrmeaningfuluse/Syndromic.html
38 http://www.cs.jhu.edu/%7Empaul/files/2011.icwsm.twitter_health.pdf
39 http://www.ncbi.nlm.nih.gov/pubmed/21573238
40 http://healthmap.org/en/
41 http://data.fao.org/ and http://www.grdi2020.eu/Repository/FileScaricati/050e1e8a-3e69-4ba0-86a5-b8f7c8322ebe.pdf
• Environmental analysis. At the most recent United Nations climate conference (COP 17), held in 2011, the European Environment Agency, the geospatial software company Esri and Microsoft presented the Eye on Earth network42, which can be used to create an online site and group of services for scientists, researchers and policy makers to share and analyze environmental and geospatial data. Three other projects launched by these institutions at COP 17 are WaterWatch (using EEA's water data); AirWatch (about EEA's air quality data); and NoiseWatch, which combines environmental data with user-generated information provided by citizens. Moreover, during the 2010 United Nations climate meeting (COP 16), Google launched its own satellite and mapping service, Google Earth Engine43, which combines a computing platform, an open API and 25 years of satellite imagery. All these tools will be available to scientists, researchers and governmental agencies for analyzing environmental conditions in order to make sustainability decisions. Using these tools, the government of Mexico created a map of the country's forests incorporating 53,000 Landsat images, which can be used by the federal authority and NGOs to make decisions about land use and sustainable agriculture.
• Crisis management and anticipation. In the aftermath of the Haiti earthquake44, a European Commission Joint Research Centre team used the damage reports mapped on the Ushahidi-Haiti platform45 to show that crowdsourced data can help predict the spatial distribution of structural damage in Port-au-Prince. Their model, based on 1,645 crowdsourced SMS reports, almost perfectly predicts the structural damage of the most affected areas reported in the World Bank-UNOSAT-JRC damage assessment, which was performed by 600 experts from 23 countries in 66 days on the basis of high-resolution aerial imagery of structural damage. As for future developments, some research46 highlights the fact that Big Data can be used for crisis management and anticipation by building up crisis observatories, i.e. laboratories devoted to collecting and processing enormous volumes of data on both natural systems and human techno-socio-economic systems, so as to gain early warnings of impending events.
With those capacities it would be possible to set up Crisis Observatories for financial and economic crises, armed conflicts, crime and corruption, social crises, health risks and disease spreading, and environmental changes.
• Global Development. An inspiring example is given by Global Pulse47, a Big Data based innovation programme fostered by the UN Secretary-General and aimed at harnessing today's new world of digital data and real-time analytics in order to foster international development, protect the world's most vulnerable populations, and strengthen resilience to global shocks. The programme rests on three main pillars: research on new data indicators providing real-time understanding of communities' welfare as well as real-time feedback on policies; creation of a toolkit of free open-source software for mining real-time data, useful for shared evidence-based decisions; and the establishment of country-level innovation centres (Pulse Labs), where real-time data are applied to development challenges.
42 http://www.eyeonearth.org/
43 http://earthengine.google.org/#intro
44 http://publications.jrc.ec.europa.eu/repository/handle/111111111/15684
45 http://haiti.ushahidi.com/
46 http://arxiv.org/pdf/1012.0178v5.pdf
47 http://www.unglobalpulse.org/about-new
The programme encompasses five main projects carried out with several partners:
o "Daily Tracking of Commodity Prices: the e-Bread Index"48, which investigates how scraping online prices could provide real-time insights on price dynamics
o "Unemployment through the Lens of Social Media"49, which relates unemployment statistics to unemployment-related conversation from the open social web
o "Twitter and the Perception of Crisis Related Stress"50, which investigates what indicators can help in understanding people's concerns about food, fuel, finance and housing
o "Monitoring Food Security Issues through New Media"51, which finds emerging trends related to food security using text analysis, semantic clustering and network theory
o "Global Snapshot of Wellbeing - Mobile Survey"52, aimed at experimenting with new tools capable of replicating the standards of traditional household surveys in real time on a global scale
• Intelligence and security. As examples of governments' commitment to Big Data for national security we can cite the Cyber-Insider Threat (CINDER)53 program, which aims at developing new ways of detecting cyber espionage activities in military computer networks as well as at increasing the accuracy, rate and speed with which cyber threats are detected. Another example is the Anomaly Detection at Multiple Scales (ADAMS)54 program led by the Defense Advanced Research Projects Agency (DARPA), which addresses the problem of anomaly detection and characterization in massive data sets. The program will initially be applied to insider-threat detection, in which individual actions are recognized as anomalous against a background of routine network activity. Finally, the Center of Excellence on Visualization and Data Analytics (CVADA) of the Department of Homeland Security (DHS) is leading a research effort on data that can be used by first responders to tackle natural disasters and terrorist attacks, by law enforcement to address border security concerns, and to detect explosives and cyber threats.
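As a minimal illustration of the syndromic-surveillance idea discussed under Public health above, the sketch below (with entirely synthetic weekly counts, invented for illustration) computes the Pearson correlation between search-query volume and an officially reported flu rate; a high correlation is what makes query data usable as an early indicator of an outbreak.

```python
import math

# Synthetic weekly data (illustrative only): flu-related query volume and the
# official flu rate reported for the same weeks.
queries = [120, 135, 160, 210, 280, 330, 310, 250, 180, 140]
flu_rate = [1.1, 1.3, 1.6, 2.2, 2.9, 3.4, 3.1, 2.5, 1.8, 1.4]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"r = {pearson(queries, flu_rate):.3f}")  # close to 1 for these series
```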
An Interesting Application: Smart Cities
A Smart City is one whose public administration or authorities deliver ICT-based services and infrastructure that are easy to use, efficient, responsive, open and environmentally sustainable. We can identify six main dimensions55:
• Smart economy, characterized by a high standard of living and competitive elements: innovation and entrepreneurship, high productivity, flexibility of the labour market, internationalism, ability to transform;
• Smart mobility, i.e. an efficient public transportation system, local and international accessibility, availability of ICT infrastructure, sustainability and safety;
48 http://www.unglobalpulse.org/projects/comparing-global-prices-local-products-real-time-e-pricing-bread
49 http://www.unglobalpulse.org/projects/can-social-media-mining-add-depth-unemployment-statistics
50 http://www.unglobalpulse.org/projects/twitter-and-perceptions-crisis-related-stress
51 http://www.unglobalpulse.org/projects/news-awareness-and-emergent-information-monitoring-system-food-security
52 http://www.unglobalpulse.org/projects/global-snapshot-wellbeing-mobile-survey
53 http://www.darpa.mil/Our_Work/I2O/Programs/Cyber-Insider_Threat_%28CINDER%29.aspx
54 http://www.darpa.mil/Our_Work/I2O/Programs/Anomaly_Detection_at_Multiple_Scales_%28ADAMS%29.aspx
55 See also the project EuropeanSmartCities at http://www.smart-cities.eu/model.html
• Smart environment (sustainability of natural resources): low pollution, protection of the environment, natural attractiveness;
• Smart people, given by a high level of human and intellectual capital, a high level of qualification, lifelong learning, social and ethnic diversity, flexibility and creativity;
• Smart living (high quality of life): presence of cultural facilities, healthy environmental conditions, individual safety, housing quality, education facilities, touristic attractiveness and social cohesion;
• Smart governance, given by citizens' participation in decision-making and the presence of public and social services and of transparent and open governance.
The combination of all the benefits stemming from Big Data in governance makes it evident that the integration of heterogeneous data from various domains holds high potential to provide insights on cities. New technologies will unlock massive amounts of data about all aspects of the city as well as its citizens. For instance, new systems involving energy use at fixed locations (point sources, such as houses and offices) are being implemented by means of smart metering, as well as through the integration of the various information systems used to record pricing and activity. Another possibility is the extraction of positional and frequency data from social media such as Twitter, Facebook, Flickr and Foursquare. All these data can be used to fulfil the Smart Cities targets. Consider for instance the transportation system, where diagnosing and anticipating abnormal events such as traffic congestion requires the integration of various data such as traffic data, weather data, road conditions and traffic light strategy. A further possibility is offered by e-inclusion technologies and open data for governance. One important example of the development of the Smart City concept at large scale is the New York City project "Roadmap for a Digital Future"56, which outlines a path to build on New York City's successes and establish it as the world's top-ranked Digital City, based on indices of internet access, open government, citizen engagement and digital industry growth.

Recent Trends
Big Data is a fast-growing phenomenon: as Google CEO Eric Schmidt pointed out in 2010, as much information is now created every two days as was created from the appearance of man until 2003. Nowadays57 it is possible to store all the world's music on a disk drive costing $600, while some 30 billion pieces of content are shared on Facebook every month. According to forecasts, global data will grow at a 40% annual rate, while total IT spending will grow by just 5%. In 2010 users and companies stored more than 13 exabytes of new data, which is over 50,000 times the data in the Library of Congress.
Big Data is also a potential booster for the economy, bearing a $300 billion potential annual value to US health care, a $600 billion potential annual consumer surplus from using personal location data globally, and a €250 billion potential annual value to European public administration. Indeed, the European Commission is expected to adopt an Open Data Strategy, i.e. a set of measures aimed at increasing government transparency and creating a €32 billion a year market for public data. Finally, as reported last year by the McKinsey Global Institute58, the United States will need 140,000 to 190,000 more workers with deep analytical expertise and 1.5 million more data-literate managers. Again according to the McKinsey Global Institute, the potential value of global personal location data is estimated at $700 billion to end users, and it can result in a decrease of up to 50% in product development and assembly costs. What is the growth engine of Big Data?
56 http://www.nyc.gov/html/mome/digital/html/roadmap/theroadmap.shtml
57 See McKinsey Global Institute (2011) "Big data: The next frontier for innovation, competition, and productivity"
58 http://www.mckinsey.com/Features/Big_Data
On one side, more "old world" data is produced through "open governance" and digitization. On the other side, "new world" data are continuously created and collected in domains such as "in silico" medicine, "in silico" engineering and Internet science. Brand new fields of science are being created: computational chemistry, biology, economics, engineering, mechanics, neuroscience, geophysics, and so on. The same is true in the humanities, with the birth of computational social science, based on mobile phones and social network digital traces. A wide array of actors, including humanities and social science academics, marketers, governmental organizations, educational institutions and motivated individuals, are now engaged in producing, sharing, interacting with, and organizing data. All these developments are enabled by the rise of new technologies for data collection: web logs; RFID; sensor networks; social networks; social data (due to the social data revolution); Internet text and documents; Internet search indexing; call detail records; astronomy, atmospheric science, genomics, biogeochemical and biological data; military surveillance; medical records; photography archives; video archives; large-scale eCommerce.

Inspiring cases
• The Ion Proton™ Sequencer59 is a rapid genome-scale benchtop sequencer. The tool allows data analysis to be performed in the same day on a single stand-alone server.
• The NIH Human Connectome Project60 aims at mapping the neural pathways that underlie human brain function in order to acquire and share data about the structural and functional connectivity of the human brain.
• The Models of Infectious Disease Agent Study61 is a collaboration of research and informatics groups to develop computational models of the interactions between infectious agents and their hosts, disease spread, prediction systems and response strategies.
• MyTransport.sg62 is a portal, developed by the Land Transport Authority (LTA) of Singapore, providing information and eServices for all land transport users.
• UN Global Pulse63, an innovation initiative launched by the United Nations Secretary-General aimed at exploring how digital data sources and real-time analytics technologies can help policymakers to better protect populations from shocks.

Tools on the market

Freely available tools
There are not many freely available tools for Big Data analysis on the market.
The presence of freely available tools on the market bears many benefits:
• Developers and analysts can use them to experiment with emerging types of data structure so as to develop new and different analytical procedures
59 http://www.lifetechnologies.com/global/en/home/about-us/news-gallery/press-releases/2012/life-techologies-itroduces-the-bechtop-io-proto.html.html
60 http://neuroscienceblueprint.nih.gov/connectome/index.htm
61 http://www.nigms.nih.gov/Research/FeaturedPrograms/MIDAS/
62 http://www.mytransport.sg/content/mytransport/home.html
63 http://www.unglobalpulse.org/about-new
• Developers and IT professionals contribute their findings and know-how back to the industry, driving knowledge exchange
The freely available tools make it possible to overcome data limitations, simplify the analytical process and visualize results. The functionalities provided by these software packages are:
• Massively parallel processing (MPP) database products for large-scale analytics and next-generation data warehousing
• Data-parallel implementations of statistical and machine learning methods (a sketch of the idea follows this list)
• Visual data mining modelling
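To give a flavour of what "data-parallel implementations of statistical methods" means in practice, the following sketch (plain Python, with a process pool standing in for the nodes of an MPP database; the partitioning and pool size are assumptions) computes a global mean and variance by merging per-partition sufficient statistics. This reduce-over-partitions pattern is the general idea behind in-database analytics libraries, not the API of any specific product.

```python
from multiprocessing import Pool

def partial_stats(partition):
    """Per-partition sufficient statistics: count, sum, sum of squares."""
    n = len(partition)
    s = sum(partition)
    ss = sum(x * x for x in partition)
    return n, s, ss

if __name__ == "__main__":
    # Four partitions stand in for data blocks spread across MPP nodes.
    partitions = [[float(x) for x in range(i, 1000, 4)] for i in range(4)]
    with Pool(4) as pool:
        parts = pool.map(partial_stats, partitions)
    # Merge step: sufficient statistics simply add up across partitions.
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    ss = sum(p[2] for p in parts)
    mean = s / n
    variance = ss / n - mean * mean
    print(f"mean = {mean:.2f}, variance = {variance:.2f}")  # over all 1000 values
```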
Particularly important in this respect are the free Big Data tools developed by Greenplum for data scientists and developers: MADlib and the Alpine In-Database Miner64, and Greenplum HD Community Edition65. Other software that is partially free and has important Big Data applications includes KNIME66, Weka/Pentaho67, Rapid-I RapidAnalytics68 and Rapid-I RapidMiner69. Finally there is R70 which, although not built for Big Data, has interesting applications in this realm.

Enterprise-level software
Enterprise-level software is adopted for the following functionalities:
• Open source software based on Apache Hadoop
• Data storage platforms and other information infrastructure solutions
• Shared-nothing massively parallel processing (MPP) database architectures
• Dataflow engines and software interconnect technologies
• Data discovery and exploration tools
• Built-in text analytics, enterprise-grade security and administrative tools
• Real-time analytic processing (RTAP) platforms
• Software-as-a-service (SaaS)
• Visualization features supporting exploratory and discovery analytics
• On-line analytical processing (OLAP)
• BI/DW (business intelligence and data warehousing)
• EDW (enterprise data warehousing)
Examples of such software include: Tableau BI platform71; SAS Data Integration Studio72; SAS High Performance Analytics73; SAS On Demand74; SAND Analytic Platform75; SAP BEx76; SAP NetWeaver77; SAP In-Memory Appliance (SAP HANA)78; ParAccel Analytic Database (PADB)79; IBM Netezza80; IBM InfoSphere BigInsights81; IBM InfoSphere Streams82; Kognitio WX283; Kognitio Pablo84; EMC Greenplum Database85; Greenplum HD86; EMC Greenplum Data Computing Appliance87; Greenplum Chorus88; Cloudera Enterprise89; and StatSoft Statistica90. Other software that was not built specifically for Big Data applications but can nonetheless be used for Big Data analytics includes Mathematica91, MatLab92 and Stata93.

Key challenges and gaps
In order to enjoy the full potential of Big Data, it will be necessary to remove the technological barriers preventing the exchange of data, information and knowledge between disciplines, as well as to integrate activities which are based on different ontological foundations. Even though Big Data has already provided many benefits, many challenges remain. For instance, Gartner (2011)94 argues that the challenges are given not only by the volume of data, but also by their variety (heterogeneity of data types and representation, semantic interpretation) and velocity (rate of data arrival and action timing). According to recent research, those challenges include95:
• Data modelling challenges: data models coherent with data representation needs; data models able to describe discipline-specific aspects; data models for the representation and querying of data provenance and contextual information; data models and query languages for representing and managing data uncertainty, and for representing and querying data quality information
• Data management challenges: providing quality, cost-effective, reliable preservation of and access to the data; protecting property rights, privacy and the security of sensitive data; ensuring data search and discovery across a wide variety of sources; connecting data sets from different domains in order to create an open linked data space. Data can be unstructured or semi-structured with no context; data formats differ; different data labels are used for the same data elements; different data entry conventions and vocabularies are used; data entry errors occur; and data sets can be so large that they cannot be effectively processed by a single machine, requiring data parallelization and task parallelization96.
64 http://www.greenplum.com/community/downloads/analytics-tools/
65 http://www.greenplum.com/community/downloads/database-ce/
66 http://www.knime.org/
67 http://weka.pentaho.com/
68 http://rapid-i.com/content/view/182/196/
69 http://rapid-i.com/content/view/181/196/
70 http://www.r-project.org/
71 http://www.tableausoftware.com/products/server
72 http://support.sas.com/documentation/onlinedoc/etls/
73 http://www.sas.com/software/high-performance-analytics/in-memory-analytics/analytics.html
74 http://www.sas.com/solutions/ondemand/
75 http://www.sand.com/analytics/architecture/
76 http://scn.sap.com/community/business-explorer
77 http://www.sap.com/platform/netweaver/index.epx
78 http://www.sap.com/solutions/technology/in-memory-computing-platform/hana/overview/index.epx
79 http://www.paraccel.com/
80 http://www-01.ibm.com/software/data/netezza/
81 http://www-01.ibm.com/software/data/infosphere/biginsights/
82 http://www-01.ibm.com/software/data/infosphere/streams/
83 http://www.kognitio.com/analyticalplatform
84 http://www.kognitio.com/pablo
85 http://www.greenplum.com/products/greenplum-database
86 http://www.greenplum.com/products/greenplum-hd
87 http://www.greenplum.com/products/greenplum-dca
88 http://www.greenplum.com/products/chorus
89 http://www.cloudera.com/products-services/enterprise/
90 http://www.statsoft.com/
91 http://www.wolfram.com/mathematica/
92 http://www.mathworks.com.au/products/matlab/
93 http://www.stata.com/
94 http://my.gartner.com/portal/server.pt?open=512&objID=202&mode=2&PageID=5553&resId=1727219; also available at http://www.gartner.com/it/page.jsp?id=1731916
95 http://www.grdi2020.eu/Repository/FileScaricati/6bdc07fb-b21d-4b90-81d4-d909fdb96b87.pdf
• Data service/tools challenges: data tools for most scientific disciplines are inadequate to support research in all its phases, so that scientists are less productive than they might be. There is a need for software able to "clean", analyse and visualize huge amounts of data. Moreover, data tools and policies for ensuring cross-collaboration and fertilization among different disciplines and scientific realms are missing.
As for other issues concerning Big Data, Boyd and Crawford (2011) highlight the following:
• Relationship between automatic search and the definition of knowledge. At the beginning of the 20th century, Ford introduced mass production, automation and the assembly line, reshaping not only the way things are produced but also the general understanding of labour, the human relationship to work, and society at large. Fordism consisted in breaking down holistic tasks into atomized and independent ones. In the same way, Big Data is a new system of knowledge characterized by a computational turn in science, leading to a change in the constitution of knowledge, the process of research and the categorization of reality. But just as Fordism had limits (indeed, it was superseded by the Just-in-Time paradigm), the specialized Big Data tools are not flawless. Big Data, as a new system of knowledge, can change the very meaning of learning itself, with all the possibilities and limitations embedded in systems of knowing.
• Big Data may produce misleading claims of objectivity and accuracy. In science there is a deep cleavage between qualitative and quantitative scientists. Apparently, qualitative scientists are engaged in creating and interpreting stories, while quantitative scientists are in the business of producing facts. Needless to say, that is not the case, as all objectivity claims come from subjects, who make subjective observations and choices. Moreover, data analysis is based on a large number of assumptions (see for instance asymptotic theory in statistics), and even though a model may be mathematically valid or an experiment scientifically valid, the final interpretation is subjective. Other examples are the difficulty of integrating different datasets in a consistent way, the arbitrary choices inherent in data cleaning, and the fact that internet databases may well be affected by biases such as frictions and self-selection.
In this view, by increasing the quantification space, especially in the social sciences, Big Data might support objectivity and accuracy claims which are not really grounded in good sense and reality.
• A higher quantity of data does not always mean better data. In all sciences there is a massive amount of literature (on interpretation bias, design standardization, sampling mechanisms and question bias, statistical significance and diagnostics) aimed at ensuring the consistency of data collection and analysis. Curiously, Big Data scientists sometimes assume the quality of their data a priori and completely neglect these methodological issues. A clear example is given by social media data,
96 Some Big Data challenges are deeply related to policy making, such as the fact that many agencies pay a high premium to both internal resources and external third parties to manage their data. Additionally, data management can sometimes be redundant if not properly set up. Moreover, regulations do not take into account the new, expanded capabilities that IT offers, as it takes time to issue a new law and bureaucrats are not so keen on novelty.
which are subject to self-selection bias, as people using social media are not representative of society itself. Even the definition of an active user or account of a social medium might not be innocuous: it is estimated that 40% of Twitter's users are merely "listeners", i.e. do not proactively take part. Finally, it has to be recognized that in many contexts high quality research is purposely carried out with a limited amount of data, as for instance in experimental analysis in game theory.
• Big Data and ethical issues. The use for research purposes of "public" data on social media websites opens the door to deontological issues. The problem is: can those data be used without any ethical or privacy consideration? Big Data is an emerging field of science, so its ethical implications have yet to be fully considered. How can researchers be sure that their activity is not harmful to some of their subjects? On the one hand, it is impossible to ask for data use permission from all the subjects present in a database. On the other hand, the mere fact that the data are available does not justify their use. Accountability to the field of research and accountability to the research subjects are the ethical keys for Big Data. In all the traditional fields of science, researchers must follow a series of professional standards aimed at protecting the rights and well-being of human subjects. The ethical implications of Big Data research, by contrast, are not yet clear.
• Digital divides created by Big Data. It is widely assumed that doing research on Big Data automatically involves having quick and easy access to databases. This is not the case, as only social media companies have access to large datasets, and they sell those data at a high price, offering only small data sets to university-based researchers. So researchers with a considerable amount of funding, or based inside those firms, can access data that outsiders cannot, and their methodologies and claims therefore cannot be verified. In this view, Big Data can create a new digital divide between researchers belonging to the top universities and working with the top companies, and scholars belonging to the periphery. But the digital divide can also be skills-based: only people with a strong computational background are able to work through APIs and analyse massive quantities of data. In conclusion, there is a new digital divide between the Big Data rich, who are able to analyze and to buy datasets and belong to top universities and companies, and the Big Data poor, who are outsiders.
Finally, according to the UN97, the Big Data challenges can be divided along two main dimensions.
Data management:
• Privacy.
• Privacy. The development of new technologies always raises privacy concerns for individuals, companies and societies. This is a crucial issue, as privacy, safety and diversity are important for defending the freedom of citizens, and companies obviously have the right to retain their confidential information. In the era of Big Data, the primary producers, the citizens using the services and devices that generate data, are seldom aware that they are doing so, or of how their data will be used. It is often also unclear to what extent users of social media such as Twitter consent to the analysis of their data. The pool of individual information shared by mobile phone and credit card companies, social media and search engines is simply astonishing. People must be conscious of this, as privacy is a pillar of freedom.

97 http://unglobalpulse.org/
• Access and sharing. A great amount of data is available online for the most disparate uses, but much data is retained by companies concerned about their reputation or the need to protect their competitiveness, or which simply lack the right incentive to share. In addition, a set of technical and regulatory arrangements has to be put in place in order to ensure the inter-comparability of data and the interoperability of systems.

On the data analysis side:

• Summarising the data. Sometimes the data may be simply false or fabricated, especially with user-generated, text-based data (blogs, news, social media messages). Data are also sometimes derived from people's perceptions, as in calls to health hotlines and online searches for symptoms. Another case is opinion mining and sentiment analysis, in which the true significance of statements can be misread, so the human factor is always crucial in the analysis. A further problem is that data are sometimes generated from expressed intentions (in blog posts, online searches, or mobile-phone systems for checking market prices), which are not a reliable indicator of actual intentions and final decisions. There is therefore a substantial problem in summarizing facts from user-generated text, as it can be difficult to distinguish feelings from facts.

• Interpreting data. A very important concern is sample selection bias, arising because the people generating the data are not representative of the entire population; for instance, younger generations make more use of the internet and mobile devices. The conclusions of the analysis are then valid only for the sample at hand and cannot be generalized. Dealing with huge amounts of data sometimes leads researchers to focus on finding patterns or correlations without examining the underlying dynamics: finding a correlation is one thing, detecting a causal relationship another, and identifying the direction of a causal relationship without a grounding theory is more difficult still. A final issue concerns combining data from different sources, which can magnify the existing flaws in each database.

Finally, we have the challenges identified by the community white paper drafted with the collaboration of a group of leading researchers across the United States98:

• Heterogeneity and incompleteness. Data must be structured in a homogeneous way prior to the analysis, as algorithms, unlike humans, are not able to grasp nuance.
Most computer systems work better if multiple items are stored with an identical size and structure, but efficient representation, access and analysis of semi-structured data are also necessary, because a less structured design is more useful for certain analyses and purposes. Even after cleaning and error correction, some errors and incompleteness will remain in the database, challenging the precision of the analysis.

• Human collaboration. Even though analytical instruments have made tremendous advances, there are still many realms in which the human factor is able to discover patterns that algorithms cannot.

98 http://imsc.usc.edu/research/bigdatawhitepaper.pdf
An example can be found in the use of CAPTCHAs, which can discern human users from computer programmes. In this view, a Big Data system must involve a human presence. Given the complexity of today's world, there is a need to harness human ingenuity from different domains through crowdsourcing. A Big Data system therefore requires technologies able to support this kind of collaboration, even in the case of conflicting statements and judgments.

Current Big Data Techniques

Big datasets can be analysed by means of several techniques coming from statistics and computer science. The principal categories include:

• Cluster analysis. A statistical technique consisting in splitting a heterogeneous group into smaller subsets of similar elements, whose characteristics of similarity are not known in advance. A typical example is identifying consumers with similar patterns of past purchases in order to tailor a given marketing strategy more accurately.
• Crowdsourcing. A technique for collecting data from a large group or community in response to an open call through networked media such as the internet. This category is of crucial importance in our case, as it is a mass-collaboration use of Web 2.0.
• Data mining. A combination of database management, statistics and machine learning methods useful for extracting patterns from large datasets. Examples include mining human resources data in order to assess employee characteristics, or consumer bundle analysis to model the behaviour of customers.
• Machine learning. A subfield of computer science (within the scope of artificial intelligence) concerning the definition and implementation of algorithms that allow computers to evolve their behaviour based on empirical data. Natural language processing is one example of an application of machine learning.
• Natural language processing. A set of computer science and linguistic methods adopting algorithms to analyse natural human language. This field, which began as a branch of artificial intelligence, deals with the interaction between computers and human language.
• Neural networks. Computational models which are structured, and which work, similarly to the biological neural networks existing among brain cells, and which are used in particular to find non-linear patterns in data. Applications include game playing and decision making (backgammon, chess, poker) and knowledge discovery in databases.
• Network analysis. Part of graph theory and network science, describing the relationships among discrete nodes in a graph or network. In particular, social network analysis studies the structure of relationships among social entities.
Applications include the role of trust in exchange relationships and the study of recruitment into political movements and social organizations.
• Predictive modelling. The branch of mathematical modelling used to predict the probability of an outcome. This technique is widely used in customer relationship management to produce customer-level models assessing the probability that a customer will take a particular action, such as cross-sell, deep-sell or churn.
• Regression. A statistical method for assessing how the value of a dependent variable changes with one or more independent variables. Applications include estimating changes in consumer behaviour due to manufacturing parameters or economic fundamentals.
• Sentiment analysis. Natural language processing methods for extracting information such as the polarity, degree and strength of sentiment about a given feature or aspect of a product. Many companies assess how different customers and stakeholders react to their products and actions by applying this analysis to blogs, social networks and other social media.
• Spatial analysis. Methods for assessing the geographical, geometric or topological characteristics of a data set. Spatial data are often drawn from geographical information systems (GIS), including addresses or latitude/longitude coordinates, to be incorporated into spatial regressions (e.g. the correlation between commodity prices and location) or simulations.
• Simulation. Modelling the behaviour of a complex system in order to perform forecasts and scenario analysis. As an example we can mention Monte Carlo simulations, a class of computational algorithms that rely on repeated random sampling to compute their results.

Current and Future Research
• Technologies for collecting, cleaning, storing and managing data: data warehouses; pivotal transformation; ETL; I/O; efficient archiving, storing, indexing, retrieval and recovery; streaming, filtering, compressed sensing, sufficient statistics; automatic data annotation; large database management systems; storage architectures; data validity, integrity, consistency and uncertainty management; languages, tools, methodologies and programming environments.
• Technologies for summarizing data and extracting meaning: reports; dashboards; statistical analysis and inference; Bayesian techniques; information extraction from unstructured, multimodal data; scalable and interactive data visualisation; extraction and integration of knowledge from massive, complex, multi-modal or dynamic data; data mining; scalable machine learning; data-driven high-fidelity simulations; predictive modelling; hypothesis generation and automated discovery.
• Technologies for using data as a decision tool: decision trees, pro-con analysis, rule-based systems, neural networks, trade-off-based decisions (incorporating reporting, statistics and knowledge-based systems).
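To give a concrete flavour of the Monte Carlo simulation technique listed above, the following minimal sketch (in Python, with purely illustrative figures invented for the example) turns uncertainty about a policy input into a distribution of outcomes by repeated random sampling:

    import random

    # Toy scenario analysis: the cost of a programme depends on uptake,
    # which is uncertain. Repeated random sampling yields a distribution
    # of total costs instead of a single point estimate.
    COST_PER_PARTICIPANT = 120.0            # hypothetical figure
    runs = []
    for _ in range(10000):
        uptake = random.gauss(50000, 8000)  # assumed uncertain uptake
        runs.append(max(uptake, 0) * COST_PER_PARTICIPANT)

    runs.sort()
    print("median cost:", runs[len(runs) // 2])
    print("95th percentile:", runs[int(len(runs) * 0.95)])

The same pattern, sampling uncertain inputs and summarizing the resulting outcome distribution, underlies far larger policy simulations.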
3.2.2. Opinion Mining and Sentiment Analysis

Summary overview

Current free tools:
- Filtering opinions based on ratings; assessing sentiment based on keywords; visual word counting
- Argument mapping

Top market tools:
- Machine learning + human analysis

Current research:
- Statistical + semantic analysis through lexicons/corpora of words with known sentiment, for sentiment classification
- Identification of policy-opinionated material to be analysed
- Computer-generated reference corpora in the political/governance field
- Visual mapping of bipolar opinion
- Identification of highly rated experts

Short term future research:
- Visual representation
- Audiovisual opinion mining
- Real-time opinion mining
- Machine learning algorithms
- Natural language interfaces
- SNA applied to opinion and expertise
- Bipolar assessment of opinions
- Multilingual reference corpora
- Recommendation algorithms

Long term future research:
- Multilingual audiovisual opinion mining
- Usable, peer-to-peer opinion mining tools for citizens
- Non-bipolar assessment of opinion
- Automatic irony detection

Introduction and definition

The explosion of social media has created unprecedented opportunities for citizens to publicly voice their opinions, but it has also created serious bottlenecks when it comes to making sense of these opinions. At the same time, the urgency of gaining a real-time understanding of citizens' concerns has grown: because of the viral nature of social media (where attention is very unevenly distributed), some issues rapidly and unpredictably become important through word-of-mouth.

Policy-makers and citizens do not yet have an effective way to make sense of this mass conversation and to interact meaningfully with thousands of others. As a result of this paradox, public debate in social media is characterized by short-termism and self-referentiality, and many experts consider social media a missed opportunity for better policy debate.

At the same time, the sheer amount of raw data is also an opportunity to make better sense of opinions. The key asset that Google exploited to reach dominance in the search market is not a better algorithm, but the power of more data.
We are therefore at a crucial juncture, where the challenge of information overload can become not a problem but an opportunity: a chance to make sense of a thousand voices and to identify problems as soon as they arise.

Opinion mining can be defined as a sub-discipline of computational linguistics that focuses on extracting people's opinions from the web. The recent expansion of the web encourages users to contribute and express themselves via blogs, videos, social networking sites, etc. All these platforms provide a huge amount of valuable information that we are interested in analysing. Given a piece of text, opinion-mining systems analyse:
• which part expresses an opinion;
• who wrote the opinion;
• what is being commented on.

Sentiment analysis, on the other hand, is about determining the subjectivity, polarity (positive or negative) and polarity strength (weakly positive, mildly positive, strongly positive, etc.) of a piece of text; in other words:
• what the opinion of the writer is.

Opinion mining and sentiment analysis cover a wide range of applications.

1. Argument mapping software helps organize policy statements in a logical way by making explicit the logical links between them. Within the research field of Online Deliberation, tools like Compendium, Debatepedia, Cohere and Debategraph have been developed to give a logical structure to a set of policy statements, and to link arguments with the evidence that backs them up99.

2. Voting Advice Applications help voters understand which political party (or which other voters) hold positions closest to theirs. For instance, SmartVote.ch asks voters to declare their degree of agreement with a number of policy statements, then matches their position with the political parties.

3. Automated content analysis helps process large amounts of qualitative data. There are many tools on the market today that combine statistical algorithms with semantics and ontologies, as well as machine learning with human supervision. These solutions are able to identify relevant comments and assign positive or negative connotations to them (the so-called sentiment); a minimal sketch of this kind of lexicon-based scoring is given below.

The first two points reflect mature application areas, while the third is emerging and carries the most relevant research issues; we will therefore mainly focus on it.
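As a rough illustration of the lexicon-based approach mentioned above (not the method of any specific tool), the sketch below scores a piece of text against a tiny hand-made polarity lexicon; real systems use large corpora of words with known sentiment and machine-learned weights:

    # Minimal lexicon-based sentiment scoring (illustrative lexicon only).
    LEXICON = {"good": 1, "great": 2, "helpful": 1,
               "bad": -1, "terrible": -2, "useless": -2}

    def sentiment_score(text: str) -> int:
        # Sum the known polarity of each token; unknown words score 0.
        tokens = text.lower().replace(",", " ").replace(".", " ").split()
        return sum(LEXICON.get(tok, 0) for tok in tokens)

    comment = "The new service is great, but the forms are useless."
    score = sentiment_score(comment)
    print("polarity:",
          "positive" if score > 0 else "negative" if score < 0 else "neutral")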
Why it matters in governance

These applications are the basic infrastructure of large-scale collaborative policy-making. They help make sense of thousands of interventions, and they provide early warning of possible disruptions by detecting feedback from citizens in a timely manner. Traditionally, ad hoc surveys are used to collect feedback in a structured manner.

99 Other similar tools include Rationale (http://rationale.austhink.com/tour)
However, this kind of data collection is expensive, since it requires an investment in design and data collection; it is difficult, since people are not interested in answering surveys; and ultimately it is of limited value, since it detects "known problems" through pre-defined questions and interviewees but fails to detect the most important problems, the famous "unknown unknowns". Opinion mining helps identify problems by listening rather than by asking, thereby ensuring a more accurate reflection of reality.

Argument mapping software is in turn useful to ensure that policy debates are logical and evidence-based, and do not repeat the same arguments again and again. These tools would be helpful not only for policy-makers, but also for citizens, who could more easily understand the key points of a discussion and participate in the policy-making process.

Recent trends

Opinion mining is not in itself a new research theme. Automated methods for content analysis have been increasingly used, with usage growing at least sixfold from 1980 to 2002 (Neuendorf, K. A. 2002. The Content Analysis Guidebook. Sage). The theme is rooted in long-established computer science disciplines such as Natural Language Processing, Text Mining, Machine Learning and Artificial Intelligence, Automated Content Analysis, and Voting Advice Applications. However, according to Pang and Lee (2008), since 2001 there has been a growing awareness of the problems and opportunities, and "subsequently there have been literally hundreds of papers published on the subject."

What is new today is the sheer increase in the quantity of unstructured data, mainly due to the adoption of social media, that is available for machine learning algorithms to be trained on. Social media content by nature reflects opinions and sentiments, whereas traditional content analysis tended to focus on identifying topics (Pang, Lee, and Vaithyanathan 2002); as such, it deals with more complex natural language problems. Because of this combination of a larger volume of available data and more complex concepts to analyse, recent years have seen a decrease in interest in semantic-based applications and a move towards greater use of statistics and visualisation. Just like other scientific disciplines, automated content analysis is becoming a data-intensive science.

Inspiring cases
• Usage of DiscoverText in government100
• OpinionSpace101
• Project NOMAD102

100 http://www.discovertext.com/Government.html
101 http://www.state.gov/opinionspace/
102 http://www.nomad-project.eu/
Tools on the market

The market of opinion mining tools is crowded with solution providers. Most of these applications are geared towards analysing customers' feedback about products and services, and are therefore skewed towards sentiment analysis that detects positive/negative feelings by interpreting natural language.

Freely available tools

Most of the state-of-the-art argument mapping and voting advice applications are freely available, because they derive largely from the academic community or from NGOs. A comprehensive list of such tools is available at http://groups.diigo.com/group/CROSSOVERproject/content/tag/argumentmapping and http://groups.diigo.com/group/CROSSOVERproject/content/tag/VAA

There are also freely available applications that simply analyse terms based on a pre-defined glossary, and give highly simplified and unreliable results. One example is http://twitrratr.com/

Figure 11: Twitrratr

Another stream of simple, free and popular solutions is word visualisation. Wordclouds are increasingly used to make sense of large quantities of information at a glance. Such tools are also extremely simplified: they only offer a visualisation of the most commonly used terms, which is helpful for getting an idea of what a document is about, but little more. Tools such as wordle.com provide an appealing design solution that can serve as an entry level to the opinion mining market, and they are therefore important for involving a much wider public in this kind of activity.
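For illustration, word visualisations of the kind just described can be produced in a few lines of Python; the sketch below assumes the third-party wordcloud and matplotlib packages are installed, and the input file name is hypothetical:

    # Render a simple wordcloud of the most frequent terms in a text.
    # Assumes: pip install wordcloud matplotlib
    import matplotlib.pyplot as plt
    from wordcloud import WordCloud

    text = open("consultation_responses.txt").read()  # hypothetical input file
    cloud = WordCloud(width=800, height=400,
                      background_color="white").generate(text)

    plt.imshow(cloud, interpolation="bilinear")
    plt.axis("off")
    plt.show()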
Figure 12: Wordclouds

Finally, another way of making sense of large amounts of information is to rely on human effort, through crowdsourcing and collective intelligence: people not only submit their opinions, but actually filter them by signalling the most important ones. Tools such as uservoice.com allow customers to submit feedback and to rank other people's ideas, thereby allowing the most popular ideas to emerge. These tools are available at very low cost, but research shows that while they are effective in gathering feedback, they are not effective in identifying good ideas, as voting tends to focus on the easiest and most popular issues (a toy sketch of this voting mechanic follows Figure 13).

Figure 13: UserVoice
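The core mechanic of such ranking tools is simple enough to sketch in a few lines (a toy illustration, not the actual implementation of any product named above):

    # Crowdsourced idea filtering: rank submitted ideas by community votes.
    from collections import Counter

    votes = ["fix potholes", "more bike lanes", "fix potholes",
             "open the data", "fix potholes", "more bike lanes"]

    for idea, count in Counter(votes).most_common(3):
        print(f"{count} votes: {idea}")

As the research cited above suggests, such popularity rankings surface the most-voted issues, which is not the same as surfacing the best ideas.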
Enterprise-level software

Besides these simple and free applications, there is a flourishing market of enterprise-level software for opinion mining with much more advanced features. These tools are largely used by companies to monitor their reputation and the feedback about their products on social media. In the government context, opinion mining has long been in use as an intelligence tool to detect hostile or negative communications (Abbasi 2007). More recently, politics has become a key area of application, as politicians monitor public opinion on social media to understand the public's reaction to their positions.

Technically, these tools rely on machine learning to identify and classify relevant comments, through a combination of latent semantic analysis, support vector machines, "bag of words" representations and semantic orientation. This process requires significant human effort aided by machines: all the tools on the market rely on a combination of machine and human analysis, typically using machines to augment the human capacity to classify, code and label comments. Automated analysis is based on a combination of semantic and statistical analysis; recently, because of the sheer increase in the quantity of datasets available, statistical analysis has become more important.
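As a rough sketch of the "bag of words" plus support vector machine combination mentioned above (using the scikit-learn library and a toy labelled dataset invented for the example; real systems train on thousands of human-coded comments):

    # Bag-of-words text classification with a linear SVM (scikit-learn).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import LinearSVC

    # Toy training data: comments hand-labelled positive (1) or negative (0).
    comments = ["great initiative, well done",
                "this policy is a disaster",
                "very helpful public service",
                "waste of taxpayer money"]
    labels = [1, 0, 1, 0]

    vectorizer = CountVectorizer()          # "bag of words" representation
    X = vectorizer.fit_transform(comments)

    classifier = LinearSVC()
    classifier.fit(X, labels)

    new = vectorizer.transform(["a helpful and well done reform"])
    print(classifier.predict(new))          # expected: [1]

In practice the human analysts supply and continually correct the labels, which is exactly the machine-plus-human division of labour described above.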
Key challenges and gaps

Current solutions for opinion mining and sentiment analysis are rapidly evolving, typically by reducing the amount of human effort needed to classify comments. Among the challenges identified, we can highlight:
- The detection of spam and fake reviews, mainly through the identification of duplicates, the comparison of qualitative with summary reviews, the detection of outliers, and the reputation of the reviewer (Liu 2008)
- The limits of collaborative filtering, which tends to identify the most popular concepts and to overlook the most innovative, out-of-the-box thinking
- The risk of a filter bubble (Pariser 2011), where automated content analysis combined with behavioural analysis leads to a very effective but ultimately distorting selection of relevant opinions and content, so that users are not aware of content that differs from their expectations
- The asymmetry in the availability of opinion mining software, which can currently be afforded only by organisations and governments, not by citizens. In other words, governments today have the means to monitor public opinion in ways that are not available to the average citizen. While content production and publication have been democratized, content analysis has not.
- The integration of opinion with behavioural and implicit data, in order to validate expressed opinions and support further analysis beyond them
- The continuous need for better usability and user-friendliness, as the tools are currently usable mainly by data analysts

Current research

Current research is focusing on:
● Improving the accuracy of algorithms for opinion detection
● Reducing the human effort needed to analyse content
● Semantic analysis through lexicons/corpora of words with known sentiment, for sentiment classification
● Identification of policy-opinionated material to be analysed
● Computer-generated reference corpora in the political/governance field
● Visual mapping of bipolar opinion
● Identification of highly rated experts

Future research: long term and short term issues

We can distinguish between short- and long-term research efforts. In the short term:
• Enhanced discoverability of content through Linked Data
• Visual representation
• Audio-visual opinion mining
• Real-time opinion mining
• Machine learning algorithms
• SNA applied to opinion and expertise
• Bipolar assessment of opinions
• Multilingual reference corpora
• Comment and opinion recommendation algorithms
• Cross-platform opinion mining
• Collaborative sharing of annotating/labelling resources

In the long term:
● Autonomous machine learning and artificial intelligence
● Usable, peer-to-peer opinion mining tools for citizens
● Non-bipolar assessment of opinion
● Automatic irony detection
3.2.3. Visual Analytics for collaborative governance: the opportunities and the research challenges

Summary Overview

Market availability:
- Information visualisation requirements for business intelligence and situational awareness
- Enterprise knowledge visualisation linking
- Online analytical processing and data mining
- Advanced social network analysis and visualisation
- Data mining and interactive visualisation
- Communication of location-based statistical data
- Information visualisation tools for high-dimensional non-linear data
- Visual analysis of data in spreadsheet format
- Demographics visualisations, allowing stakeholders and decision makers to have a clear picture of the data and of their trends over time
- Legal arguments visualisation: text analysis, argumentation mappings and visualisation algorithms
- Discussion arguments visualisation, making use of visualisation techniques for visualising a discussion's flow
- Geographic visualisation tools
- Financial markets monitoring and visualisation in real time
- Advanced applications for security and defense

Challenges and gaps:
- Close the loop of information selection, preparation and visualisation
- Simultaneous multiple visualisation
- Integration of visualisation with comments / wikis / blogs
- Collaborative platform display
- Interaction between visualisation and models

Current research:
- Mobile visual analytics tools
- Geo-visualisation of government data
- Integration with opinion mining and participatory sensing
- Evaluation framework for visualisation effectiveness
- Visualisation infrastructures for policy modelling issues

Short term future research:
- Re-usable, mashable tools for visual analytics
- Tighter integration between automatic computation and interactive visualisation
- Bias identification and signalling in visualisation
- Perceptual, cognitive and graphical principles
- Efficiency of visualisation techniques to enable interactive exploration, with interaction techniques such as focus & context
- Impact evaluation of visual analytics on policy choices

Long term future research:
- Learning adaptive algorithms for user intent
- Advanced visual analytics interfaces
- Intuitive, affordable visual analytics interfaces for citizens
- Development of novel interaction algorithms incorporating machine recognition of the actual user intent and appropriate adaptation of main display parameters (such as the level of detail, data selection, etc.) by which the data is presented
Introduction and definition

The explosion in computing techniques has led to the generation of a tremendous amount of data, stored on the internet and processed in IT infrastructures all over the world. Examples of new technologies and sources for data collection include: web logs; RFID; sensor networks; social networks; social data (the "social data revolution"); internet text and documents; internet search indexing; call detail records; astronomy, atmospheric science, genomics, biogeochemical and biological data; military surveillance; medical records; photography archives; video archives; and large-scale eCommerce.

In managing this huge amount of data, when it comes to human-computer interaction there is a need to distil the most important information and present it in a humanly understandable and comprehensible way. This is where visualisation comes in: a way to interpret and translate data from computer-understandable formats to human ones, by employing graphical models, charts, graphs and other images that are conventional for humans (Bederson and Shneiderman 2003). On one hand, we can define visualisation as any technique for creating insight, preferably by allowing users to interact with and alter the visualisation so as to iteratively answer questions and form new questions based on previous findings. On the other hand, visualisation can be defined as a set of techniques for communicating knowledge that can be supported by data.

In contrast with visualisation traditionally seen as the output of the analytical process, visual analytics103 considers visualisation a dynamic tool that aims to integrate the outstanding capabilities of humans in visual information exploration with the enormous processing power of computers, to form a powerful knowledge discovery environment. In this view, visual analytics is useful for tackling the increasing amount of data available and for making the best use of the information contained in the data itself; moreover, it aims to present data in a way suitable for informing the policy-making process.

More particularly, the interdisciplinary field of visual analytics aims to combine human perception and computing power in order to solve the information overload problem. In Thomas and Cook's (2005) definition, visual analytics is "the science of analytical reasoning supported by interactive visual interfaces". More precisely, visual analytics is an iterative process that involves information gathering, data preprocessing, knowledge representation, interaction and decision making.
The characteristic of this field is that it combines data-mining and text-mining technologies, used for preprocessing massive amounts of data, with information visualisation104, which is useful for separating important from trivial and useless information. In a certain sense, information visualisation becomes a tool in a semi-automated analytical process based on cooperation between humans and computers, in which the user decides the direction of the analysis for a particular task while the system works as an interaction tool. It is somewhat difficult to distinguish between information visualisation and visual analytics. In simple terms, information visualisation handles abstract data structures such as trees or graphs, while visual analytics deals properly with sense-making and reasoning. More particularly, information visualisation is mostly applied to data not arising from scientific inquiry, e.g. graphical representations of data for business, government, news and social media.

103 http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1573625
104 We can define information visualisation as a way of making data easier to understand using direct sensory experience, rather than linguistic or logical reasoning. Or, in the words of Friendly, information visualisation is the study of "the visual representation of large-scale collections of non-numerical information, such as files and lines of code in software systems, library and bibliographic databases, networks of relations on the internet, and so forth" (see Michael Friendly 2008).
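A toy sketch of this semi-automated loop, automatic preprocessing feeding an interactive visual display, might look as follows (Python with the scikit-learn and matplotlib libraries; the data are synthetic and invented for the example):

    # Semi-automated analysis: the machine clusters the data,
    # the human inspects the clusters visually and steers the next step.
    import matplotlib.pyplot as plt
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Synthetic "regional indicator" data with two hidden groups.
    data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])

    labels = KMeans(n_clusters=2, n_init=10).fit_predict(data)  # automatic step

    plt.scatter(data[:, 0], data[:, 1], c=labels)                # visual step
    plt.title("Clusters proposed by the algorithm, for human review")
    plt.show()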
Visualisation work does not necessarily deal with an analysis task, nor does it always use advanced data analysis algorithms. Visual analytics, on the other hand, can be seen as an integral approach to decision-making, combining visualisation, human factors and data analysis. It entails identifying the best algorithm for a given analysis task and integrating the best automated analysis algorithms with appropriate visualisation and interaction techniques.

Visualisation and visual analytics should be considered in strict integration with other research areas, such as modelling and simulation105, social network analysis, participatory sensing, open linked data, and visual computing.

The disciplines in the domain of visualisation and visual analytics include: Human-Computer Interaction (HCI), Computer Science, Graphic and Information Design, Usability Engineering, Cognitive and Perceptual Science, Decision Science, Information Visualisation, Scientific Visualisation, Databases, Data Mining, Statistics, Knowledge Discovery, Data Management & Knowledge Representation, Presentation, Production and Dissemination, Interaction, Geospatial Analytics, Graphics and Rendering, Cognition, and Perception.

As far as visual analytics methodologies are concerned, in the CROSSOVER taxonomy we can identify the following: visualisation of a single, static, embedded data set; visualisation of multiple static data sets; visualisation of a single live data feed or updating data set; and visualisation of multiple data sources, including live feeds or updates.

Why it matters in governance

Today's governments face the challenge of understanding an increasingly complex and interdependent world, and the fast pace of change and increased instability in all areas of regulation require rapid decision-making able to draw on the widest amount of available evidence in real time. How can visualisation and visual analytics help?

• Generate high involvement of citizens in policy-making. One of the main applications of visualisation is making sense of large datasets and identifying key variables and causal relationships in a non-technical way. Similarly, it enables non-technical users to make sense of data and interact with them. For instance, the GapMinder106 software helps people understand the main global demographic changes and raises awareness of the implications of sound health policies in developing countries.

• Understand the impact of policies: visualisation is instrumental in making the evaluation of policy impact more effective. For instance, Farmsubsidy107 helps people understand who the main beneficiaries of the Common Agricultural Policy are by geo-referencing individual beneficiaries.
• Identify problems at an early stage, detect the "unknown unknowns" and anticipate crises: visual analytics is widely used in the intelligence community because it helps exploit the human capacity to detect unexpected patterns and connections in data, and thereby supports the early detection of potential threats. For instance, the VisAware108 project in the US provides situational awareness in emergencies, helping to coordinate the different resources involved.

105 The connection between simulation and visualisation appears even more clearly in user interfaces, which enable the visualisation to take user commands.
106 http://www.gapminder.org/
107 http://farmsubsidy.org/
History and trends

Since the beginning of human history, visualisation has been an effective way to communicate both abstract and concrete ideas. The appearance of digital visualisation led to the development of graphics hardware, as well as to a wide array of techniques used to visualise data in a number of ways (van Wijk, 2005).

One of the best-known examples of visualisation dates back to the 19th century, with the drawings109 by Charles Joseph Minard, who developed a format to show data tied to a timescale against a landscape background. Minard conveyed a complex series of events through various data measures, explained together with their causes and consequences in a single graphic. His drawings show the march of Napoleon's army towards Moscow, starting with 422,000 men and ending with 10,000, and Hannibal's crossing of the Alps, starting with 97,000 men and ending with 6,000.

The modern visualisation field, making use of computer graphics, originated in the late 1980s with studies on scientific visualisation applied to fluid dynamics, volume visualisation, molecular modelling, imaging of remote-sensing data, and medical imaging (Rosenblum 1994). From scientific visualisation sprang several more recent areas, such as information visualisation, mobile visualisation, location-aware computing and visual analytics. Information visualisation arose when Robertson, Card and Mackinlay in the 1980s started to use the work of Bertin (1967) and Tufte (1983) in interactive computer applications. Later, Shneiderman (1996) and others formalized the process of information visualisation, and Ware (2004) emphasized the importance of human perception in it. In parallel with information visualisation arose the field of data mining, aimed at discovering information hidden in massive amounts of data. A characteristic of that field is that it aimed at substituting automatic computer operations for human analysis, rather than supporting human perception with interactive visualisation. To address this, the interdisciplinary field of visual analytics was developed, combining human perceptual abilities with computers' processing power in order to tackle massive amounts of information. Visual analytics can therefore be seen as the combination of human factors and data analysis on one side, and information visualisation on the other (Keim et al. 2008). Future developments of visual analytics include enhanced collaboration capabilities, more intuitive interaction, support for non-computing devices, and the integration of quantitative and qualitative data.
Visual analytics also requires particular technological advances, as traditional data mining tools are unsuitable for some necessary functionalities, such as the algorithm speed required for iterative visualisation.

Inspiring cases in information visualisation and visual analytics:
• GapMinder110
• US Labour Force visualisation111
• State Cancer Profiles112

108 http://www.sci.utah.edu/publications/yarden05/VisAware.pdf
109 http://www.math.yorku.ca/SCS/Gallery/minbib/index.htm. For other examples please refer to http://www.infovis.net/printMag.php?num=110&lang=2
110 http://www.gapminder.org/
111 http://flare.prefuse.org/launch/apps/job_voyager
112 http://statecancerprofiles.cancer.gov/micromaps/
• Instant Atlas113
• Rennes Metropole114
• City Dashboard115
• OECD Better Life Index116
• Gain Index117
• IBM Many Bills118
• Graphical Contingency Analysis119
• DeepCity3D120
• Vis Sense121

Projects in information visualisation and visual analytics:
• Jigsaw122: visualisation for investigative analysis
• Ploceus123: network-based visualisation of tabular data
• Dotlink360124: visual analytics for exploring converging business ecosystems
• SportsVis125: visualisation to analyse sports data
• Intelligence Analysis126: visual analytics to help intelligence analysts
• SellTrend127: visualising temporal, categorical event transactions
• Dust & Magnet128: InfoVis via a magnet metaphor
• Fund Explorer129: stock portfolio diversification through Context Treemaps

113 http://www.instantatlas.com/CDC_story.xhtml
114 http://dataviz.rennesmetropole.fr/quisommesnous/en/
115 http://citydashboard.org/choose.php
116 http://www.oecdbetterlifeindex.org/
117 http://index.gain.org/
• InfoCanvas130: peripheral information art
• Information Mural131: squeezing large data sets into small views
• NetVizor132: visualising network topologies
• SunBurst133: radial space-filling views of hierarchies
• Tarantula134: testing and debugging large software systems

Policy applications of visualisation and visual analytics tools

With regard to the governance and policy-making context, some visualisation tools are applicable to a wide array of issues and situations (education, environment, public health, urban growth, national defense, etc.). In the public context, the visual analytics of public data is an exploding field, particularly in relation to the open data movement, with the aim of monitoring the policy context and evaluating government policies. Many basic mash-up tools are available for visualising government data.

Some examples:

• Demographics visualisations, allowing stakeholders and decision makers to have a clear picture of the data and of their trends over time. Visualisation of demographic data makes the design and evaluation of various policies easier, as there is no need to dig through acres of numbers: advanced algorithms are able to create figures and illustrations that are easy to interpret. Typical examples are the aforementioned GapMinder (which embeds visualisations of various demographic data at the global level), as well as Dynamic Choropleth Maps135, DataPlace136, Hive Group137, Name Voyager138 and State Cancer Profiles139.

118 http://manybills.researchlabs.ibm.com/
119 http://availabletechnologies.pnnl.gov/technology.asp?id=288
120 http://www.deepcity3d.eu/default.aspx
121 http://www.vis-sense.eu/
122 http://www.cc.gatech.edu/gvu/ii/jigsaw/
123 http://www.cc.gatech.edu/gvu/ii/ploceus/
124 http://www.cc.gatech.edu/gvu/ii/dotlink/
125 http://www.cc.gatech.edu/gvu/ii/sportvis/
126 http://www.cc.gatech.edu/gvu/ii/intell/
127 http://www.cc.gatech.edu/gvu/ii/selltrend/
128 http://www.cc.gatech.edu/gvu/ii/dnm/
129 http://www.cc.gatech.edu/gvu/ii/fundexplorer/
130 http://www.cc.gatech.edu/gvu/ii/infoart/
131 http://www.cc.gatech.edu/gvu/ii/mural/
132 http://www.cc.gatech.edu/gvu/ii/netviz/
133 http://www.cc.gatech.edu/gvu/ii/sunburst/
134 http://pleuma.cc.gatech.edu/aristotle/Tools/tarantula/
135 http://www.turboperl.com/dcmaps.html
136 http://www.knowledgeplex.org/dataplace.html
137 http://www.hivegroup.com/gallery/worldpop/
138 http://www.babynamewizard.com/voyager#
139 http://statecancerprofiles.cancer.gov/micromaps/
• Legal arguments visualisation: text analysis, argumentation mappings and visualisation algorithms can be applied to legal documents in order to simplify legislation, making it more accessible and comprehensible to the general public (Many Bills140, Clear Congress Project141), or in order to visually represent corroborative evidence (e.g. the tools Carneades142 and Deflog143).

• Discussion arguments visualisation, making use of visualisation techniques for visualising the flow of a discussion comprising various arguments, in order to instantly convey the topics discussed, as well as the arguments and the support those arguments gain. In this view, visualisation helps all interested stakeholders to understand the flow of a discussion, which is presented to them in a structured and interactive format that avoids numerous discussion threads. Examples of such visualisation tools include DebateGraph144, which is intensively used for building argumentation maps, as well as Araucaria145, Compendium146, Argublogging147 and Rationale148.

• Geovisualisation, which provides theory, tools and methods for the visual analysis, synthesis, exploration and representation of geographical data and information, in order to derive problem-specific models and design task-specific maps that incorporate geographical knowledge into planning and decision making. Examples of such tools include ESTAT149, the GeoViz Toolkit150, the geovisualisation tools at the US National Cancer Institute151, and some applications of InstantAtlas152.

• Advanced visualisation applications used for security and national defense. In this field, software advances are being led both on the military and on the corporate front; business organizations also have urgent information visualisation requirements supporting their business intelligence and situational awareness capabilities, as well as data mining and reporting requirements. Many software innovations are therefore targeted at financial and corporate requirements, but they are also applicable to the defense domain, due to common data mining and information visualisation challenges. Examples of such tools are DataMontage153, HoneyComb154, Oculus GeoTime155 and Starlight156.
140 http://researcher.watson.ibm.com/researcher/view_project.php?id=1232
141 http://clearcongressproject.com/
142 http://carneades.berlios.de/downloads/
143 http://www.ai.rug.nl/~verheij/aaa/
144 http://www.debategraph.org
145 http://araucaria.computing.dundee.ac.uk/
146 http://compendium.open.ac.uk/institute/
147 http://www.arg.dundee.ac.uk/?p=624
148 http://rationale.austhink.com/
149 http://www.geovista.psu.edu/ESTAT/
150 http://www.geovista.psu.edu/geoviztoolkit/index.html
151 http://gis.cancer.gov/nci/geovisualisation.html
152 http://www.instantatlas.com/clients.xhtml#government
153 http://www.stottlerhenke.com/datamontage/examples/madcap/Air_force_wargame_simulation.htm
154 http://www.hivegroup.com/solutions/demos/merit.html
155 http://www.oculusinfo.com/papers/GeoTime_Brochure_06.pdf
156 http://starlight.pnl.gov/
Other very interesting examples are Analyst's Notebook [157] and Sentinel Visualizer [158], adopted by intelligence agencies such as the CIA.

• Visualisation applications adopted for monitoring financial markets and visualising them in real time. An example of such a tool is SmartMoney [159].

• Visualisation applied to the monitoring of government finances and expenditure, such as USAspending.gov [160], OffenerHaushalt [161] and Where Does My Money Go [162].

Tools on the market

A massive quantity of visualisation tools, both freely available and enterprise-level, is now available online; these are critical for analysts and researchers, but also useful for ordinary people.

Freely available tools

First of all there are visualisation websites useful for sharing and presenting data, providing clear context on important cultural, environmental, social and economic issues, and for building charts and sharing visualisations and discoveries. One such example is Data360 [163]. Moreover there are "do it yourself" infographic tools such as Vizify [164], Visual.ly [165], Easel.ly [166] and Vizualize.me [167].

Then there are data visualisation tools for plotting data on maps, frameworks for creating charts, graphs and diagrams, tools that simplify data handling by transforming data into spreadsheets, visual data mining and database exploration systems, data visualisation systems for high-dimensional data, and visualisation frameworks for animating data. Some examples of these tools are Data Wrangler [168], the JavaScript InfoVis Toolkit [169], VisDB [170], Graphviz [171], IBM OpenDX [172], Gephi [173], GeoCommons [174], Miso Dataset [175], Polymaps [176] and Tableau Public [177]. A brief sketch of how a graph tool such as Graphviz can be scripted is given after the footnotes below.

[157] http://www.i2group.com/us/products/analysis-product-line/ibm-i2-analysts-notebook
[158] http://www.fmsasg.com/LinkAnalysis/Government/Solutions.asp
[159] http://www.smartmoney.com/map-of-the-market/
[160] http://usaspending.gov/
[161] http://bund.offenerhaushalt.de/
[162] http://www.wheredoesmymoneygo.org/
[163] http://www.data360.org/index.aspx
[164] https://www.vizify.com/
[165] http://visual.ly/
[166] http://www.easel.ly/
[167] http://vizualize.me/
[168] http://vis.stanford.edu/wrangler/
[169] http://philogb.github.com/jit/
[170] http://bib.dbvis.de/uploadedFiles/202.pdf
[171] http://www.graphviz.org/
[172] http://www.opendx.org/
[173] https://gephi.org/
[174] http://geocommons.com/
[175] http://misoproject.com/dataset/
[176] http://polymaps.org/
[177] http://www.tableausoftware.com/public/
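Purely as an illustration of how such tools are typically driven, here is a minimal sketch using the open-source graphviz Python package (a wrapper around the Graphviz tool cited above) to render a small argument network; all node names are hypothetical.

```python
# Minimal sketch: scripting Graphviz (cited above) through its Python
# wrapper to draw a small, hypothetical argument network around a proposal.
from graphviz import Digraph

g = Digraph("policy_debate", format="png")
g.attr(rankdir="LR")  # lay the graph out left to right

# Hypothetical nodes: a policy proposal and the arguments around it
g.node("P", "Proposal: congestion charge")
g.node("A1", "Argument: lower emissions")
g.node("A2", "Argument: burden on commuters")
g.edge("A1", "P", label="supports")
g.edge("A2", "P", label="attacks")

g.render("policy_debate")  # writes policy_debate.png
```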
Tools available in the market

Apart from free visualisation tools, there is also much more advanced software, used by firms to satisfy their information visualisation requirements for business intelligence support and situational awareness capability, as well as their data mining and reporting requirements. Other uses include enterprise knowledge visualisation, linking knowledge to spatial data, online analytical processing and data mining, advanced social network analysis and visualisation, data mining with interactive visualisation, communication of location-based statistical data, online and batch environments for business graphics, information visualisation tools for high-dimensional non-linear data, visual analysis of data in spreadsheet format, analysis of high volumes of unstructured text, and analysis of high-dimensional data in large complex data sets and of multivariate time-oriented data.

Some examples of such software are: CViz Cluster visualisation [178], IBM ILOG visualisation [179], Spotfire [180], Survey Visualizer [181], Infoscope [182], Sentinel Visualizer [183], Grapheur 2.0 [184], InstantAtlas [185], Miner3D [186], VisuMap [187], Drillet [188], Eaagle [189], GraphInsight [190], Gsharp [191] and Tableau [192].

Other examples of visualisation software can be found at:

• http://groups.diigo.com/group/CROSSOVERproject/content/tag/visualisation

Key Challenges and Gaps

New tools like the Many Eyes Word Tree [193], Treemap [194], Tag Cloud [195] and Bubble Chart [196] are available but lack interactivity.
What is also missing is a better interaction between visualisation approaches and the analytical processes of text mining, as well as a better integration between new opportunities for data collection (such as open data and participatory sensing), policy modelling and visual analytics tools; a minimal sketch coupling a text-mining step with a visualisation step is given after the footnotes below.

[178] http://www.alphaWorks.ibm.com/formula/CViz
[179] http://www-01.ibm.com/software/websphere/ilog/
[180] http://spotfire.tibco.com/
[181] http://www.macrofocus.com/public/products/surveyvisualizer/
[182] http://www.macrofocus.com/public/products/infoscope/
[183] http://www.fmsasg.com/
[184] http://grapheur.com/
[185] http://www.instantatlas.com/
[186] http://www.miner3d.com/
[187] http://www.visumap.net/
[188] http://drillet.appspot.com/
[189] http://wp.eaagle.com/
[190] http://www.graphinsight.com/
[191] http://www.avs.com/products/gsharp/index.html
[192] http://www.tableausoftware.com/
[193] http://www-958.ibm.com/software/data/cognos/manyeyes/page/Word_Tree.html
[194] http://www.treemap.com/
[195] http://www.tagcloud.com/
[196] See http://manyeyes.alphaworks.ibm.com/manyeyes/, which can also be found in the project CROSSOVER Diigo collection
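To make the text-mining/visualisation gap above concrete, the following minimal sketch (ours, under stated assumptions) couples a simple text-mining step, TF-IDF over hypothetical consultation comments, with a basic visualisation, using scikit-learn and matplotlib.

```python
# Minimal sketch: coupling a text-mining step with a visualisation step.
# The comments are hypothetical stand-ins for citizen-consultation text.
from sklearn.feature_extraction.text import TfidfVectorizer
import matplotlib.pyplot as plt

comments = [
    "the new bus lanes reduce traffic in the city centre",
    "bus fares are too high for daily commuters",
    "cycling lanes would reduce traffic and emissions",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(comments)

# Average TF-IDF weight per term across all comments
weights = tfidf.mean(axis=0).A1
terms = vec.get_feature_names_out()
top = sorted(zip(weights, terms), reverse=True)[:8]

plt.barh([t for _, t in top], [w for w, _ in top])
plt.xlabel("mean TF-IDF weight")
plt.title("Most salient terms in consultation comments")
plt.tight_layout()
plt.savefig("salient_terms.png")
```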
Most applications related to visual analytics of public data remain at the level of visualisation only, with limited analytical functionality.

Visualisation tools are still largely designed for analysts and are not accessible to non-experts. Intuitive interfaces and devices are needed to interact with data results through clear visualisations and meaningful representations. User acceptability is a challenge in this sense, and clear comparisons with previous systems to assess their adequacy, together with objective rules of thumb to facilitate design decisions, would be a great contribution to the community.

Scalability of visualisation in the face of big data availability is a permanent challenge, since visualisation requires additional performance with respect to traditional analytics in order to allow real-time interaction and reduce latency.

Finally, visualisation is largely a demand- and design-driven research area. In this sense one of the main challenges is to ensure multidisciplinary collaboration between engineering, statistics, computer science and graphic design.

A relevant challenge for visualisation and visual analytics is to adapt existing techniques to policy modelling, for example:

• RelaNet (Landesberger et al. 2008), which displays the network relations and is thereby able to show the connections and co-variances of the different opinions over time
• CirVis3D (Landesberger et al. 2009), which can visualise clustered opinion snippets as well as display time series in order to show opinion trends over time

Following Chen (2005), who builds on Rhyne et al. (2004), we can enumerate a number of challenges in the topic:

• Usability: the availability of low-cost, ready-to-use and reconfigurable information visualisation systems, as well as a balanced portfolio of general-purpose, fully functional information visualisation systems, is crucial
• Understanding elementary perceptual–cognitive tasks: research should not only focus on relatively high-level cognitive activities such as browsing and searching, or judging the relevance of information. Rather, it should primarily focus on the identification and decoding of visualised objects, which would be a fundamental step toward engineering information visualisation systems
• Prior knowledge: in order to understand the underlying message in visualised information, users need prior knowledge of how to operate the information visualisation system, as well as the domain knowledge needed to interpret the content
• Education and training: on the one hand, there is the need for researchers and practitioners within the field of information visualisation to learn and share the principles and skills of visual communication.
On the other hand, potential users from other fields must realise the value of information visualisation and how it might contribute to their work
• Intrinsic quality measures: finding quality metrics is crucial for the evaluation and selection of visual information advances, and for understanding to what extent an information visualisation design represents the underlying data faithfully and efficiently, and preserves the intrinsic properties of the underlying phenomenon
• Scalability: the need to adopt parallel computing and other high-performance computing techniques in information visualisation
• Aesthetics: it is important to assess how insight and aesthetics interact in sustaining insightful and visually appealing information visualisation. What visual properties make users think a graph is pretty or visually appealing?
• Paradigm shift from structures to dynamics: a shift from the study of the structure of visualisation to the assessment of the dynamic properties of underlying phenomena, providing built-in trend detection mechanisms embedded in the data modelling component
• Causality, visual inference, and predictions: there is a strong need for sensitive and selective algorithms that can resolve conflicting evidence and suppress background noise. In this respect a great role is played by complex network and link analysis
• Knowledge domain visualisation: this encompasses several of the aforementioned challenges, and is linked to the fact that it is not only the information conveyed that is important, but also its structure, which is a social construction

Current research

• Close the loop of information selection, preparation and visualisation
• Multiple, coordinated views in visualisation/visual analytics [197]
• Integration of visualisation with comments / wikis / blogs
• Collaborative platform display
• Interaction between visualisation and models
• Mobile visual analytics tools, e.g. Sitegeist [198]
• Geo-visualisation of government data
• Integration with opinion mining and participatory sensing
• Evaluation framework for visualisation effectiveness
• Visualisation infrastructures for policy modelling issues

EU-funded projects in visual analytics include:

• VisMaster - Visual Analytics: Mastering the Information Age [199]
• VisSense - Visual Analytic Representation of Large Datasets for Enhancing Network Security [200]

[197] See Heer, Jeffrey, Fernanda B. Viégas, and Martin Wattenberg. 2007. Voyagers and Voyeurs: Supporting Asynchronous Collaborative Information Visualisation. In CHI 2007, April 28–May 3, 2007, San Jose, California, USA. See also the presentation on social visualisation given by Fernanda Viégas and Martin Wattenberg at Harvard University, April 13, 2009
[198] http://sunlightfoundation.com/blog/2012/12/13/sitegeist-uncover-the-data-around-you/
[199] http://www.visual-analytics.eu/
[200] http://cordis.europa.eu/projects/rcn/94912_en.html
• CUBIST - Combining and Uniting Business Intelligence and Semantic Technologies [201]
• WATTALIST - Modelling and Analysing Demand Response Systems [202]
• CODE - Commercially Empowered Linked Open Data Ecosystems in Research [203]
• SemSeg - 4D Space-Time Topology for Semantic Flow Segmentation [204]

Future research: long-term and short-term issues

Short-term research

• Reusability of mashup tools for visual analytics (a mashup is a web application that combines data from one or more sources into a single integrated tool or application)
• Tighter integration between automatic computation and interactive visualisation, consisting in the availability of complex and powerful algorithms for manipulating the data under analysis and transforming it in order to feed suitable visualisations
• Bias identification and signalling in visualisation
• Techniques and algorithms for creating effective visualisation tools based on perceptual psychology (dealing with the process by which the physical energy received by sense organs forms the basis of perceptual experience), cognitive science (focusing on how information is represented, processed and transformed) and graphical principles
• Visualisations enabling interactive exploration techniques such as focus & context, so that viewers can see the object of primary interest presented in full detail while at the same time getting an overview impression of all the surrounding information, or context, available (a minimal focus-and-context sketch is given after the footnotes below)
• Exploiting visualisation as a medium to engage citizens in policy-related complex matters
• Visualisation as a way to provide (persuasive) feedback and change attitudes, opinions and behaviours
• Visualisation as a medium for grassroots/crowd-sourced participation and collaboration on data-related issues
• Impact evaluation of visual analytics on policy choices
• Research in making visualisation accessible to non-experts

[201] http://cordis.europa.eu/projects/rcn/95904_en.html
[202] http://cordis.europa.eu/projects/rcn/100984_en.html
[203] http://cordis.europa.eu/projects/rcn/103419_en.html
[204] http://www.semseg.eu/
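As a small illustration of the focus-and-context idea mentioned above, here is a minimal sketch (ours, on synthetic data) using matplotlib's inset-axes helpers: a detail view (focus) is shown magnified while the full series remains visible as context.

```python
# Minimal focus-and-context sketch: a zoomed detail view (focus)
# embedded in a plot of the whole series (context). Synthetic data.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, mark_inset

x = np.linspace(0, 10, 2000)
y = np.sin(x) + 0.1 * np.random.randn(x.size)

fig, ax = plt.subplots()
ax.plot(x, y, linewidth=0.5)           # context: the full series

axins = zoomed_inset_axes(ax, 6, loc=1)  # loc=1: upper right corner
axins.plot(x, y, linewidth=0.8)          # focus: same data, magnified
axins.set_xlim(4.0, 4.5)
axins.set_ylim(-1.2, -0.5)
axins.set_xticks([])
axins.set_yticks([])
mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5")

plt.savefig("focus_context.png", dpi=150)
```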
Long-term research

• Learning/adaptive algorithms for user intent (a toy sketch of such an adaptive mechanism follows this list). Note that learning/adaptive algorithms are defined as algorithms capable of automatically changing their behaviour based on the execution context (the data handled by the algorithm, configuration parameters of the runtime environment, resources used) in order to obtain optimal performance
• Advanced visual analytics interfaces: visual interfaces in which neither the analytics nor the visualisation needs to be advanced in itself, but the synergy between automation and visualisation is in fact advanced
• Intuitive and affordable visual analytics interfaces for citizens
• Development of novel interaction algorithms incorporating machine recognition of the actual user intent, or of the actual relevance for the user, and appropriate adaptation of main display parameters, such as the level of detail and data selection, by which the data is presented
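Purely to make the notion of a learning/adaptive algorithm concrete, the following toy sketch (an assumption of ours, not a technique prescribed by the roadmap) uses a simple epsilon-greedy bandit to adapt one display parameter, the level of detail, to implicit user feedback; the user_feedback() function is a hypothetical stand-in for real interaction signals.

```python
# Toy sketch of an adaptive display algorithm: an epsilon-greedy bandit
# that learns which level of detail users respond to best. The
# user_feedback() function is a hypothetical stand-in for real signals
# (dwell time, clicks, explicit ratings).
import random

LEVELS = ["low", "medium", "high"]        # candidate levels of detail
value = {lvl: 0.0 for lvl in LEVELS}      # estimated reward per level
count = {lvl: 0 for lvl in LEVELS}
EPSILON = 0.1

def user_feedback(level):
    # Hypothetical simulator: this user population prefers "medium".
    base = {"low": 0.3, "medium": 0.7, "high": 0.5}[level]
    return 1 if random.random() < base else 0

for step in range(1000):
    if random.random() < EPSILON:          # explore occasionally
        level = random.choice(LEVELS)
    else:                                  # otherwise exploit best estimate
        level = max(LEVELS, key=value.get)
    reward = user_feedback(level)
    count[level] += 1
    value[level] += (reward - value[level]) / count[level]  # running mean

print("learned preference:", max(LEVELS, key=value.get), value)
```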
3.2.4. Serious Gaming for Behavioural Change

Introduction and definition

So far, collaborative ICTs have dramatically augmented the capacity of people to connect and collaborate. Yet less impact has been achieved in terms of actual change and action, as most collaboration remains confined to an elite of highly motivated individuals and faces the traditional limits of human attention and motivation. As illustrated in other challenges, ICT can improve data collection and analysis, but if attention and motivation are not present, little impact can be achieved. Even when citizens and government are fully aware of necessary policy choices, they might irrationally choose short-term benefits. This challenge covers ICT solutions that enable behavioural change and action.

Simulation and serious gaming (also known as interactive learning environments) offer opportunities to act on personal incentives and to show the long-term and systemic effects of individual choices, thereby lowering the engagement barrier to collaborative governance and augmenting its impact. In particular, serious games have been developed for educational purposes and for raising awareness on particular issues while not requiring high levels of engagement.

Simulation tools enable users to see the systemic and long-term impact of their actions in a very concrete and tangible form, thereby encouraging more responsible behaviour and long-term thinking. Gaming engages users through the "fun" and "social" dimensions, thereby providing incentives towards action. Feedback and simulation systems include both individual and government behaviour, thereby allowing policy-makers and citizens to detect the impact of both individual and policy choices.

Engagement of domain experts is a crucial issue for building reliable games and simulation tools. Toolkits and modules enable a wider audience of stakeholders to take a direct, active role in game development, thereby enabling all relevant knowledge to be elicited and captured by the simulation and gaming scenarios and models. Pre-built toolkits enable games to be created directly by thematic experts rather than by technology experts.

Why it matters in governance

Most applications of simulation and gaming have been developed in the context of education and learning, while more interactive feedback-producing systems have been applied to personal health and energy conservation. The specific challenges of gaming for public policy awareness and action are currently less researched, but they are very specific because of the large-scale interaction and systemic effects of individual behaviour that characterise this field.
Furthermore, the availability of simulation toolkits is necessary to empower a diverse and inclusive simulation landscape, where the most diverse set of ideas can be influential and listened to.

Recent trends

Simulation and gaming have started to be applied in different policy contexts in order to engage wider audiences. Games are developed "on purpose" by highly skilled developers, in the public sector and in civil society, therefore requiring significant investment and often lacking the specific thematic knowledge of the field. Furthermore, existing serious games lack the flexibility to allow for unpredictable developments and non-linear behaviours, where scenarios evolve and adapt to users' choices rather than being rigidly prescribed. Commercial solutions that turn long-term effects into short-term feedback are available, but they still lack usability as well as the fun dimension of games, and ultimately require high levels of engagement.
They are designed for individual feedback and do not cover the complexity of systemic interactions, which are typical of public governance issues.

To sum up, serious gaming still requires high levels of engagement, and progress is needed in terms of usability and appeal in order to reach "casual gamers", including through immersive and emotion-aware games.

Current practice

• Purpose-built gaming and simulation for the understanding of policy issues and of individual behaviour

Public policy applications

Simulation and gaming can be useful to policy makers in the following ways (Mayer et al. 2004, Bots and van Daalen 2007):

• Research and analyse a policy issue when it is not feasible to tackle the real system (due to time constraints, or simply because it does not exist) or to include human behaviour by way of a computer model (due to unrealistic assumptions such as perfect rationality). In this view the game becomes a laboratory which can produce a great deal of data providing useful insights
• Design alternative solutions to a problem, then analyse and assess the possible consequences of the alternative solutions in order to recommend a course of action to the policy-maker. In this view the game can be seen as a virtual design studio, useful to boost out-of-the-box thinking about alternative solutions to a policy issue, and also to ponder the consequences of recommendations
• A game can be used to provide strategic advice, acting as a virtual practice ring in which the policy maker can rehearse different strategies. A typical example is given by war games, in which the other players act as sparring partners for the policy makers, playing the role of another stakeholder as opportunistically as possible
• Many policy issues require mediation, making it necessary to seek consensus among stakeholders. This can be done by putting the players around a virtual negotiation table by means of a mediation game. In this way changes in attitude and the discovery of new opportunities for conflict resolution are eased by the interaction among stakeholders during the game
• Normally experts and elites are involved in the policy-making process, while citizens and ordinary people are completely neglected. By defining virtual consultation forums, however, it is possible to allow equal access for all actors carrying views and opinions which would otherwise have been disregarded. In this respect using games and simulations bears an advantage given by the fact that ordinary people can focus and express themselves more easily when playing a role
• Clearly, ethical questions and opinions have a great influence on the policy making process.
Games and simulations can be used to clarify the values and arguments behind a point of view. While in ordinary political debate values remain implicit, by creating a virtual parliament it is possible to make them explicit. Furthermore, gaming and simulations can be used to magnify the positions and opinions of stakeholders, so that the game can be designed to reward players for the quality and clarity of their argumentation.

Moreover, readapting the taxonomy of Sawyer and Smith, simulation and serious gaming can be useful in the following domains (cross-referenced with game objectives):

• Public sector and NGOs: public health education and mass casualty response (games for health); political games (advergames); employee training (games for training); providing information to the
public (games for education); data collection/planning (games for research); strategic and policy planning, spatial planning (games for producing); diplomacy and opinion research (games as work)
• Defence: rehabilitation and wellness (games for health); recruitment and propaganda (advergames); soldier/support training (games for training); school house education (games for education); wargames and planning (games for research); war planning and weapons research (games for producing); command and control (games as work)
• Healthcare: cyber therapy/exergaming (games for health); public health (advergames); policy and social awareness campaigns (games for training); training games for health professionals (games for education); games for patient education and disease management (games for research); visualisation and epidemiology, biotech manufacturing and design (games for producing); public health response planning and logistics (games as work)
• Education: informing about diseases/risks (games for health); social issue games (advergames); training teachers/workforce skills (games for training); learning (games for education); computer science and recruitment (games for research); P2P learning (games for producing); distance learning (games as work)

Inspiring cases

Let us now present some inspiring cases of serious games applied to policy making:

• SimHealth: The National Health Care Simulation, a management simulation of the U.S. healthcare system released during the Congressional debates on the Clinton health care plan
• SimCity 2013: an upcoming city-building/urban-planning simulation computer game allowing players to visualise data such as pollution and water distribution, to be released in February 2013
• City One: the game teaches industry professionals and civil servants real-world planning in fields such as the optimisation of banking, retail, energy and water solutions
• Democracy 2: a government simulation game in which the player acts as the president or prime minister of a democratic government, introducing and altering policies in areas such as tax, the economy, welfare, foreign policy, transport, law and order and public services
• Close Combat Marines: a serious game for military training purposes, with particular reference to the United States Marine Corps
• Incident Commander™, a NIMS-compliant training tool for Homeland Security: in this game the player takes the role of incident commander in the case of a natural or man-made disaster, terrorist attack or hostage situation.
Application: training of US Department of Justice officers
• Virtual Battlespace Systems 2: an interactive military simulator developed for the United States Marine Corps and the Australian Defence Force to meet the individual needs of military, law enforcement, homeland defence, loadmaster and first-responder training environments
• Pulse!! Virtual Clinical Learning Lab for Health Care Training: the game recreates a lifelike, interactive, virtual environment in which civilian and military health care professionals practice clinical skills for cases of catastrophe or terrorist attack
• Levee Patroller: an immersive 3D game-based environment to train levee inspection knowledge and skills, in order to be prepared to cope with unexpected flooding
• Construct.it: a game-based learning environment allowing players to experience and debrief some of the complexities involved in large-scale urban projects. Application: the development plan for Scheveningen Harbour in The Hague
• Simport-Maasvlakte 2: a computer-supported multi-player simulation game that mimics the real processes involved in planning, equipping and exploiting the new area in the Port of Rotterdam
• Pro Rail: capacity optimisation of a complex infrastructural network, in this case the Dutch railways. Applications: cargo capacity management, the opening of the Vecht bridge, increasing traffic on the A2 corridor
• Win Manager: an online multiplayer negotiation business game in which players conduct a sequence of bilateral negotiations pursued through private threads on the general game board
• Management Business Game: a business game focussed on the simulation of a company's management in a competitive market, which can be played both online and offline
• Management Utilities Euroshop: management of a chain of retail stores selling electronic products, through which players identify the relationships between management issues and competitive market factors
• Shadow Government: a serious game based on the gamification of real countries, systems and worldwide events. Based on System Dynamics and customised at the country level, it allows players to test several policy interventions and evaluate their impacts (a toy sketch of the stock-and-flow mechanics underlying this modelling style follows this list)
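Since the last case is built on System Dynamics, the following toy sketch (our illustration, not part of any of the games above) shows the basic stock-and-flow mechanics such games simulate: a single stock integrated over time under a policy parameter. All numbers are invented.

```python
# Toy System Dynamics sketch: one stock ("public approval") driven by an
# inflow (policy benefits) and an outflow (decay of goodwill), integrated
# with a simple Euler step. All parameters are invented for illustration.
def simulate(policy_spending, years=20, dt=0.25):
    approval = 50.0                  # stock: approval index, 0..100
    history = []
    for step in range(int(years / dt)):
        inflow = 0.8 * policy_spending       # benefits raise approval
        outflow = 0.05 * approval            # goodwill decays over time
        approval += (inflow - outflow) * dt  # Euler integration
        approval = max(0.0, min(100.0, approval))
        history.append(approval)
    return history

# Compare two hypothetical policy interventions
low, high = simulate(2.0), simulate(5.0)
print("approval after 20y: low spending %.1f, high spending %.1f"
      % (low[-1], high[-1]))
```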
Key challenges and gaps

Following Mayer (2009) we can identify the following challenges and gaps:

• Cultural changes concerning the interaction between science, politics and democracy; changes in the role of elites, activism and citizens' participation, as well as the recent emergence of game cultures
• Changes in the perception of public policy making, i.e. from rational-comprehensive to political and incremental
• How natural and human-caused events can influence the political agenda (global warming, pollution, depletion of natural resources, terrorism)
• Institutional changes and the emergence of new industrial or institutional actors
• Technological innovation in computing and simulation modelling, such as agent-based models, cellular automata and virtual game worlds

Moreover, following IDATE, we can identify the following key challenges regarding serious gaming in particular:

• Restructuring the game in order to cope with specific purposes and broaden the audience
• Innovating the existing business models
• Automating a portion of the production process, for example the integration of sector-specific elements
• Creating sector-targeted serious games and persuading reluctant users
• Investing in all connected platforms

Current research

• Kit-based serious games
• Integration between policy models and simulation
• Design of appealing, adaptive and context-aware interfaces; impact of simulation and gaming on individual behaviour
• Unconscious impact of feedback systems

Research disciplines: human-computer interaction, sensors, information visualisation, sensor design, psychology, pedagogy, public policy

Future research: long-term and short-term issues

Short-term research

• Citizen- and expert-generated gaming
• Immersive interfaces
• Large-scale collaboration in development
• Casual serious gaming
• Ethical issues in serious gaming
• User-controlled simulation and gaming
• Non-linear and adaptive scenarios for gaming in policy contexts
• Integrated analysis of information and behavioural change
• Impact of simulation and gaming on systemic behaviour

Long-term research

• Augmented-reality citizen-generated gaming and simulation
• Ubiquitous feedback systems on public governance
• Modelling and displaying the long-term systemic effects of individual choices on public policy topics
• Interplay between different feedback systems and other information outputs

3.2.5. Linked Open Government Data

The notion of government data covers all the information that governmental bodies produce, collect or pay for. This can include geographical data, statistics, meteorological data, data from publicly funded research projects, and digitised books from libraries. The definition of open public data applies when such data can be readily and easily consulted and re-used by anyone with access to a computer; in the European Commission's view, 'readily accessible' means much more than the mere absence of a restriction of access to the public. Data openness has resulted in some applications in the commercial field, but by far the most relevant applications are created in the context of government data repositories. With regard to linked data in particular, most research is being undertaken in other application domains such as medicine; government is only starting to play a leading role in the move towards a web of data, and current research in the field of linked open data for government is limited.

Following the Open Government Working Group meeting in Sebastopol [205] and the Sunlight Foundation [206], there is a set of principles according to which data can be considered open:

• Data must be complete, i.e. no part of them should be omitted due to security, privacy or privilege limitations
• Data must be primary, disaggregated and unmodified, and must be published at the finest possible level of granularity
• Data must be timely, as their value is time-relevant
• Data must be accessible to the widest range of users and purposes
• Data formats must not be under the exclusive control of any entity

[205] http://opengovdata.org/home/8principles
[206] http://sunlightfoundation.com/policy/documents/ten-open-data-principles
• Data should not be subject to any copyright
• Data must be machine-processable
• Data access must be non-discriminatory
• Data must be permanent, so that the information they embed is available over time
• Data must be cheaply accessible

In the same way, according to Davies et al. [207], engaging open data should:

• Be demand-driven
• Put data in context
• Support conversation around data
• Build capacity, skills and networks
• Collaborate on data as a common resource

Moreover, according to Vander Sande et al. [208], publishing data leads to more transparency, new businesses, better evidence-based policy making and increased public sector efficiency only if the different actors in the chain have co-ownership of the data and are able to participate directly in its correction. In this sense free licensing and shared platforms for publishing data and offering feedback/corrections directly on the data are crucial.

Linked open government data are valuable for a number of reasons. Firstly, openness in government data is important for economic reasons: for instance, the Open Data Strategy for Europe launched by the European Commission is expected to deliver a €40 billion boost to the EU's economy each year. More generally, open government data are important for participatory decision making, since they:

• Promote transparency concerning the destination and use of public expenditure
• Improve the quality of policy making, which becomes more evidence-based
• Display the full economic and social impact of information, and enable services based on government data
• Increase collaboration across government bodies, as well as between government and citizens
• Permit new added-value services to come into existence
• Increase the awareness of citizens on specific issues, as well as their information about government policies
• Promote the accountability of public officials

Very important examples are given within the scope of the Open Government Initiative [209] carried out by the Obama Administration to promote government transparency on a global scale:

• Data.gov [210]: a platform which increases the ability of the public to easily find, download and use datasets that are generated and held by the Federal Government.
Within the scope of Data.gov, the US and India have developed an open-source version called the Open Government Platform [211] (OGPL), which can be downloaded and evaluated by any national government or state or local entity as a path toward making their data open and transparent
• USAspending.gov [212]: a searchable website displaying, for each Federal award, the name of the entity receiving the award, the amount of the award, information on the award, and the location of the receiving entity

[207] http://www.w3.org/2012/06/pmod/pmod2012_submission_5.pdf
[208] http://www.w3.org/2012/06/pmod/pmod2012_submission_4.pdf
[209] http://www.whitehouse.gov/open
[210] http://www.data.gov/
[211] http://www.opengovplatform.org/
[212] http://www.usaspending.gov/
• FederalRegister.gov [213]: an HTML edition of the Federal Register that makes it easier for citizens and communities to understand and get informed about the regulatory process
• Performance.gov [214]: a website providing a window on the US Administration's efforts to improve performance and accountability, in order to create a government that is more effective, efficient, innovative and responsive
• IT Dashboard [215]: a website enabling federal agencies, industry, the general public and other stakeholders to view details of federal information technology investments

At the European level there is publicdata.eu [216], a repository of applications making use of open data. At the European national level the initiatives include:

• United Kingdom: Data.gov.uk [217], which collects 5,400 datasets from all central government departments and a number of other public sector bodies and local authorities
• Italy: Dati.gov.it [218], an open data portal allowing citizens, developers, firms and public administrations to make use of the public administration's information stock
• Spain: datos.gob.es [219], the national portal for managing and organising the Catalogue of Public Information of the General State Administration
• Ireland: StatCentral.ie [220], providing standard documentation on recurring official statistics and links to where they can be found
• Netherlands: Overheid.nl [221], the central access point to all information about government organisations of the Netherlands

[213] https://www.federalregister.gov/
[214] http://www.performance.gov/
[215] http://www.itdashboard.gov/
[216] http://publicdata.eu/app?page=1
[217] http://data.gov.uk/
[218] http://www.dati.gov.it/content/datigovit-il-portale-dei-dati-aperti-della-pa
[219] http://datos.gob.es/datos/
[220] http://www.statcentral.ie/
[221] http://www.overheid.nl/
At the transnational level there are the World Bank [222], United Nations [223], REEP [224] and Open Knowledge Foundation [225] portals.

As highlighted by the experience of OpenCorporates [226], which turns freely available raw data into something genuinely useful that customers will be prepared to pay for, open data are valuable also for the private sector.

Figure 14 - Open Data Business Model (source: Istituto Superiore Mario Boella)

In this case the data can generate revenue in a number of ways:

• Subscriptions or royalties
• The so-called "freemium" model, where a basic service is offered for free with charges for premium services
• Advertising by third parties
• Cross-subsidy
• Offering services that are cheaper and more efficient to outsource

[222] http://data.worldbank.org
[223] http://data.un.org
[224] http://www.reegle.info/
[225] http://opengovernmentdata.org
[226] http://www.w3.org/2012/06/pmod/pmod2012_submission_16.pdf
Looking at best practices and examples of linked open government data, it is possible to refer to the diagram maintained by Richard Cyganiak (DERI, NUI Galway) and Anja Jentzsch (HPI), which is a visualisation of the key LOD providers and their linkages [227].

Figure 15 - LOD providers and their linkages

Other inspiring cases are:

• The clean energy information gateway reegle.info [228], which makes use of LOD to provide comprehensive clean energy country profiles so that users can access the highest quality information in a visually appealing fashion. By using reegle.info small organisations can share responsibilities, as they are not required to maintain large databases. Moreover, the information is directly linked to data providers, so that updates take place immediately.
• Open Energy Information (OpenEI) [229], a collaborative knowledge-sharing platform providing open and free access to energy-related models, tools and data. The business benefit of using this system stems from the fact that a small organisation, without a huge team of people to maintain a large database of clean energy information, can obtain the same amount of knowledge by making use of an overview of a variety of energy-related and country-specific topics. Moreover, given the direct link to the data providers' information, any update occurs in real time.
• The UK official government archive Legislation.gov.uk, which offers access to all published UK legislation so that it can be shared by citizens and businesses. The dataset

[227] http://lod-cloud.net/
[228] http://data.reegle.info
[229] http://en.openei.org
covers more than 800 years and includes the most recent changes in legislation. Legislation.gov.uk merges the contents of the Office of Public Sector Information website and the Statute Law Database in order to provide UK public and local acts, church instruments, ministerial orders and acts of parliament.
• The tool publicspending.gr [230], which takes data from a variety of sources and, from the combined data, derives graphs showing where public money is being spent and which departments are spending it. This is the first linked open data application in Greece, where it can be used to aid policy making and transparency.
• Another interesting case is "Where Does My Money Go?" [231], which shows how daily taxes are allocated among the different functions of government.
• UK Crime Map [232], which has prompted a change in the way police resources are prioritised, and is widely used by the government itself, which benefits from much more efficient access to information.
• The BudgIT [233] platform, which turned the Nigerian budget into an interactive document, complete with commentary channels via the Web and SMS.
• DERI Galway's Volvo Ocean Race [234] app for Android and iPhone, created by converting various data sets into linked data and then enriching that data through crowdsourcing. DERI also classified 350 apps, showing that the majority of apps:
   • have been produced by individuals rather than commercial companies
   • are Web-based
   • combine OGD with maps
   • rely on static data sets rather than real-time data
   • use a single data set rather than mixed data
• The Open Culture Data project (Open Cultuur Data) [235], presenting the results of a hackathon. The winning entry made use of a video dataset and smartphone capabilities to match a person's location with video taken in a given area.
• GLAMs [236] (galleries, libraries, archives, museums), which provide open access to cultural heritage.

Some other interesting tools are the RDF Data Cube Vocabulary [237], the Data Cube faceted browser [238], Openpolis [239], Nosdeputes [240] and Virtueel Platform [241]; a minimal sketch of publishing a statistical observation with the RDF Data Cube Vocabulary is given after the footnotes below.
[230] http://www.w3.org/2012/06/pmod/pmod2012_submission_32.pdf
[231] http://www.wheredoesmymoneygo.org/
[232] http://www.police.uk/
[233] http://www.w3.org/2012/06/pmod/pmod2012_submission_8.pdf
[234] http://www.w3.org/2012/06/pmod/pmod2012_submission_20.pdf
[235] http://www.opencultuurdata.nl/
[236] http://www.w3.org/2012/06/pmod/pmod2012_submission_22.pdf
[237] http://www.w3.org/TR/vocab-data-cube/
[238] http://www.w3.org/2012/06/pmod/pmod2012_submission_12.pdf
[239] http://www.openpolis.it/
[240] http://www.nosdeputes.fr/
[241] http://virtueelplatform.nl/nieuws/apps-for-amsterdam-zoekt-nieuwe-open-data-apps/
[242] http://www.w3.org/2012/06/pmod/pmod2012_submission_29.pdf
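As an illustration of the RDF Data Cube Vocabulary mentioned above, here is a minimal sketch assuming the open-source rdflib Python library; the example namespace and the sector/year/amount properties are hypothetical, the qb: vocabulary terms are real.

```python
# Minimal sketch: one statistical observation expressed with the
# RDF Data Cube Vocabulary (cited above) via rdflib. The example.org
# namespace and its properties are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/budget/")

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

obs = EX["obs/health-2013"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX["dataset/spending"]))
g.add((obs, EX.sector, Literal("health")))                      # dimension
g.add((obs, EX.year, Literal("2013", datatype=XSD.gYear)))      # dimension
g.add((obs, EX.amount, Literal("12.5", datatype=XSD.decimal)))  # measure

print(g.serialize(format="turtle"))
```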
Current challenges: open data and opinion mining

There is a lot of effort being put into extracting public opinion and sentiment towards policies [242]. The problem is that the tools that are better at extracting real data from social media content are, of course, expensive. In this respect, the future challenges for opinion mining applied to social media are:

• How to reduce human effort
• Identification of good ideas
• Finding the necessary investment
• How to improve the usability of tools

We also have to note that most social media interaction is not carried out in public: on Facebook, for example, only open discussion pages can provide information for sentiment analysis and opinion mining. A minimal sentiment-scoring sketch is given below.

Current research topics

• Integration of open government data (OGD) and social media data (SMD): policy makers will soon be able to see the subjective reaction to objective changes through a dashboard powered by linked data
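Purely as an illustration of the sentiment-extraction step discussed above, the following minimal sketch assumes NLTK's VADER sentiment analyser (one freely available, lexicon-based option, not the roadmap's prescribed tool); the example comments are invented.

```python
# Minimal opinion-mining sketch: scoring invented policy comments with
# NLTK's VADER sentiment analyser (a lexicon/rule-based method).
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
sia = SentimentIntensityAnalyzer()

comments = [
    "The new open data portal is genuinely useful and easy to search.",
    "The consultation process ignored local residents completely.",
]
for text in comments:
    scores = sia.polarity_scores(text)  # neg/neu/pos plus compound score
    print("%+.2f  %s" % (scores["compound"], text))
```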
3.2.6. Collaborative Governance

Introduction and definition

While all the challenges provide opportunities for more effective large-scale collaboration in public action, the relevant institutional design is far from being in place. The formal inclusion of citizens' input in the policy-making process, the deriving institutional rules, and the legitimacy and accountability framework are all issues that have so far been little explored. Instant, open governance implies a substantial increase in feedback loops, on a different scale with respect to the present context. The stability of any system is affected by the number, speed and intensity of its feedback loops, and the institutional context has been designed for fewer and slower loops.

The definition and design of the public sector's role is being directly affected by the radical increase in bottom-up collaboration deriving from the lower cost of self-organisation. There are also important questions to be answered: where does legitimacy come from, how to gain and maintain the trust of users, and how to identify users online. There is also the very important issue of how to take into account the diversity of standpoints, i.e. how to achieve a consensual answer to controversial social issues, especially when we do not offer alternatives (ready-made options) but start from an open question and work through the different options proposed by participants. Furthermore, the trade-off between direct and representative models of democracy will have to be analysed in this context. It is far from proved that open and collaborative governance is really inclusive and representative of all social groups, including the disadvantaged, and of all standpoints. There is a visible risk that online collaboration increases the divide rather than reducing it. However, in the current situation, the circles where policy proposals are designed, amended and ranked hierarchically are very small, composed of the leaders of political parties, top-level civil servants and the CEOs of large firms.
Online collaborative tools would broaden these circles to all those who have competence in the field being discussed and the ability to elaborate an argument.

The management of institutional bodies is changing: innovative ideas and insights coming from employees and citizens are key resources to be exploited, and meritocracy and transparency are entering a once stable and conservative workforce. Enhanced collaboration with citizens and private third parties should be accompanied by adequate legal and accountability frameworks, mapping incentives to participation and enabling business models for the different stakeholders.

The privacy paradigm is changing, and appropriate, more dynamic frameworks have to be designed, taking into account the willingness of citizens to share information while at the same time ensuring their full awareness of the implications and their control over data usage.

Recent trends

The current status is characterised by practice-driven implementation accompanied by little scientific reflection. Guidelines and soft regulation are being created from scratch and by building on other institutions' examples. The development of collaborative governance is growing rapidly without an appropriate reference framework.

Public policy applications

As is widely recognised, the policy issues of our age can be addressed only through the collaboration of all the components of society, including the private sector and individual citizens. In this view the advantages of collaborative governance include:

• Effectiveness and efficiency in the delivery of programmes
• Professional development / capacity building
• Better needs assessment and use of available resources
• Boosting communication among citizens and stakeholders
• Increasing transparency and accountability, as well as equity and inclusiveness
• Avoiding duplication in policy making
• Increasing responsiveness and access, and building relationships
• Improving the public image
• Improving the quality of information
• Consensus-based decision-making
• Increased acceptance of results

Collaborative governance can be applied to virtually all policy-making fields. The following areas are merely examples:
• Infrastructure management: building new infrastructure often entails the need to balance conflicting interests, especially where the externalities are large
• Digital inclusion: the increase in the use of ICT has to be fostered by collaboration at every level
• Energy: delivering affordable and efficient energy; collaboration on the definition of energy regulations
• Environment: definition of new regulations on environmental safeguards; mediation concerning the use of environmental resources; collaboration in assessing public projects with environmental impact
• Health care: collaboration in health care reform, awareness and education campaigns, disease prevention
• Transportation: collaboration in the definition of a transportation plan, negotiation of transportation rules, settlement of disputes over the construction of a transportation facility
In general, the fields where collaborative governance is most fruitful are those that (1) involve many different categories of stakeholders and (2) where conflicts are not frozen along entrenched lines. Indeed, the benefits of collaboration are greatest when diversity is highest, while entrenched conflicts are rarely solved by dialogue and become instead a matter of power balance and force.

Inspiring cases of ICT applications to Collaborative Governance
The Open Government Initiative243, carried out by the Obama Administration, promotes government transparency and citizen engagement on a global scale. Inspiring cases include:
• Partner4Solutions244: the website of the Partnership Fund for Program Integrity Innovation at the Office of Management and Budget. Through this tool the Partnership Fund aims to gather ideas from citizens for improving the Federal assistance programme
• Regulations.gov245: a website where it is possible to comment on proposed regulations and related documents published by the U.S. Federal government, as well as to search and review original regulatory documents and comments submitted by others
• Challenge.gov246: an online challenge platform allowing the public to bring the best ideas and top talent to bear on the nation's most pressing challenges, which can range from simple ideas and suggestions to proofs of concept, designs, or finished products that solve the grand challenges of the 21st century
• We the People247: allows citizens to create and launch a petition in order to engage the government
• Change by Us248: allows people to propose ideas and projects for improving the cities they live in. So far the tool has been applied in New York, Phoenix and Philadelphia
• Wikirendum249: a platform where citizens can share ideas on policy making
• The Digital Engagement Guide250: a platform for sharing ideas on how to use digital and social media in the public sector

Key challenges and gaps
The key challenges and gaps in collaborative governance are:
• Coping with accelerating change in policy making
• Overlaps between institutions and jurisdictions
• The increasing complexity of the issues to be tackled
• The ability to choose the appropriate tool for the problem at hand
• The need to integrate policies and resources
• Managing expectations
• Public involvement processes can be disconnected from real decision making
• Tackling conflicting interests among participants
• Using tools appropriate to the scale (small or large) of the problem/solution
• Calibrating the level of citizens' participation required with respect to the nature of the problem/solution251
• Defining appropriate levels of accountability
• Avoiding instability in preferences

243 http://www.whitehouse.gov/open
244 http://www.partner4solutions.gov/
245 http://www.regulations.gov/#!home
246 http://challenge.gov/
247 https://wwws.whitehouse.gov/petitions
248 http://changeby.us/
249 http://wikirendum.org/
250 http://www.digitalengagement.info/
251 Making a reliable synthesis of a large-scale discussion can be daunting. One approach considered viable is machine-assisted human "harvesting", or "catching": software uses several algorithms to identify possible "atoms of interest". For example, network analysis can detect the balkanization of a community around a polarizing issue; if users give ratings to statements (as is the case in some of the tools examined by CROSSOVER), Bayesian inference on user behaviour can detect inconsistencies, and therefore irrational biases. The content thus algorithmically selected is then presented to human "harvesters", who can write summaries. These attention-mediation algorithms were apparently first developed in the context of medicine, as a tool for helping doctors formulate diagnoses.
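Footnote 251 mentions network analysis as a way to detect the balkanization of a community around a polarizing issue. The fragment below is a purely illustrative sketch of that idea (it is not one of the tools examined by CROSSOVER): it uses the Python networkx library to partition a toy reply network into communities and to measure how sharply the discussion separates; the edge list and its interpretation are invented for illustration.

    import networkx as nx
    from networkx.algorithms import community

    # Hypothetical reply network: nodes are participants, edges are replies.
    # Two tightly knit groups linked by a single bridging edge.
    G = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"),
                  ("x", "y"), ("y", "z"), ("x", "z"),
                  ("c", "x")])

    # Partition the discussion into communities and compute modularity:
    # a clear split with few cross-community edges is a hint of echo
    # chambers forming around the issue.
    parts = community.greedy_modularity_communities(G)
    q = community.modularity(G, parts)
    print("community sizes:", [len(p) for p in parts], "modularity:", round(q, 2))

In an attention-mediation pipeline of the kind the footnote describes, a flagged split like this would simply be queued for a human "harvester" to review, not acted on automatically.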
Current research
• Analysing the compatibility of new collaborative behaviour with the existing institutional framework

Research disciplines: political science, public administration, law, sociology and other social sciences in general (including institutional economics, for example), as well as organisational, network and innovation theories, etc.
Possible research instruments: thematic networks, Support Actions

Future research: long-term and short-term issues
Short-term research
• Updated institutional framework
Long-term research
• New models of governance and service provision

3.2.7. Participatory Sensing

Introduction and definition
Participatory sensing refers to the use of sensors, usually embedded in personal devices such as smartphones, to allow citizens to feed in data of public interest. This can include anything from photos to the passive monitoring of movement in traffic. Participatory sensing involves a higher commitment from citizens, in contrast to opportunistic sensing, where the user may not be aware of the active applications. The diffusion of mobile phones significantly lowers the barriers to participation and data input by citizens, with automated geo-tagging and time-stamping: given the right architecture, phones can act as sensor nodes and location-aware data collection instruments. While traditional sensor nodes are centralised, these sensors are under their owners' control. This would make data available at an unprecedented scale.

Why it matters in governance
Participatory sensing radically improves data availability for evaluating the effect of public policies and how individual behaviour is changing, provided adequate privacy provisions are in place. Devices should give users enhanced control over their data, i.e. which data are being sent, when, and how they are treated, as well as the possibility of enhanced data anonymisation.
Furthermore, the design of participatory sensing should be placed in the framework of policy contexts,
allowing the inference of policy impact from the data. Future platforms should combine participatory sensing, mass moderation, personalised feedback and social network analysis to assess the interplay between perception, data and social interaction.
Participatory sensing is already used in "public sphere" activities such as environment and health. However, the specific issue of evaluating public policies has so far been little researched, particularly with regard to the implications for privacy, large-scale deployment and bias management in citizen sensing.

Recent trends
Small-scale experiments are being carried out in different domains, mainly dealing with environmental and health data. Applications in the field of urban planning are particularly promising, yet there is no structural link between participatory sensing and policy models. Larger-scale deployment would require more granular privacy compliance and user control, adequate incentives for participation and the business models deriving from them. There is no formalisation of the requirements and design of opportunistic versus participatory sensing, including sampling design for participant recruitment.

Advantages of Application in Public Policy
Participatory sensing can be used to gather and collect the following kinds of information:
• Civic data: neighbourhood maintenance issues, power outage documentation
• Environmental data: data providing hints on pollution levels, climate-change-related data
• Transportation: commuting habits, location and movement data, road conditions, connections to public transportation, incidence of traffic, occurrence of accidents
• Health: vital signs, information providing early warnings of diminishing health, information on epidemic spread, self-administered diagnostic tests
The advantages for policy making are:
• The possibility of collecting data at an otherwise unachievable scale and geographic range
• Virtually costless data collection
• Revealing and highlighting behavioural patterns and routines, which can then be changed accordingly
• Engaging ordinary citizens in sensitive issues
• More pervasive monitoring capacity in fields such as environment and health

Inspiring cases of policy-making-related applications252
• PEIR253 (Personal Environmental Impact Report): a system allowing users to determine their exposure to environmental pollution by using a sensor in their mobile phone able to determine their location and means of transport
• eHealthSense254: automatically detects health-related events that are not directly observed by current sensor technology, like pain, tow conditions and depression
• SenSay255: able to alert the medical staff when the user falls or in case of suspect behaviour
• MobAsthma256: monitors exposure to the pollution affecting asthma. Both the volume of air inhaled and the pollution rate are collected by sensors interfaced with the mobile phone
• Haze Watch257: measures the concentration of carbon monoxide, ozone, sulphur dioxide and nitrogen dioxide through pollution sensors embedded in mobile phones
• NoiseTube258: registers noise levels, which are used to monitor noise pollution, which can affect human hearing and behaviour
• EpySurveyor259: used by the Red Cross to evaluate anti-malarial bednet distribution and use throughout sub-Saharan Africa, as well as the coverage achieved by vaccination campaigns
• CarTel260: a mobile sensing and computing system making use of mobile phones carried in vehicles to collect information about traffic or WiFi access points
• NoiseSpy261: a sound-sensing system able to log data for monitoring environmental noise. Users can explore a city area while visualising noise levels in real time

Key challenges and gaps
• Preserving the privacy of users, who are required to provide extremely personal data
• Ensuring security, as a citizen's current and past positions might be spotted
• Providing new sensors capable of increasing the range of information that individuals can track and use
• Creating network infrastructures aimed at supporting participatory sensing services
• Providing incentives for participation in data collection
• Developing analytical techniques to carry out more accurate inference with mobile-phone-supplied data such as geo-data and images
• Developing visual analytics and data analysis techniques that provide relevant and easy-to-interpret information for the general public
• Creating engaging and efficient mobile device interfaces to support effective, real-time user interaction
• Providing quality data, temporal and geographical availability, and the ability to cover the phenomena

252 To the best of our knowledge there are not yet government applications in the realm of participatory sensing
253 http://peir.cens.ucla.edu
254 http://dl.acm.org/citation.cfm?doid=1411759.1411761
255 See Siewiorek et al. (2003)
256 http://crystal.uta.edu/~kumar/CSE4340_5349MSE/mobsense.pdf
257 http://www.pollution.ee.unsw.edu.au/
258 http://noisetube.net/
259 http://www.episurveyor.org/user/index
260 http://cartel.csail.mit.edu/doku.php
261 www.cl.cam.ac.uk/mobilesensing/downloads.htm
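To make the privacy challenge listed above concrete, the sketch below shows one simple, purely illustrative way a participatory sensing client could coarsen the automated geo-tag of a reading before it leaves the device, so the user keeps control over location precision. The Reading structure, grid size and field names are assumptions for illustration and are not taken from any of the tools cited above; it uses only the Python standard library.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Reading:
        lat: float        # automated geo-tag
        lon: float
        value: float      # e.g. a noise level in dB, as in NoiseTube or NoiseSpy
        timestamp: str    # automated time-stamp

    def coarsen(r: Reading, grid: float = 0.01) -> Reading:
        """Snap the geo-tag to a coarse grid (roughly 1 km at mid latitudes)
        before upload, trading spatial resolution for anonymity."""
        snap = lambda x: round(x / grid) * grid
        return Reading(snap(r.lat), snap(r.lon), r.value, r.timestamp)

    raw = Reading(50.84721, 4.35174, 68.5,
                  datetime.now(timezone.utc).isoformat())
    print(coarsen(raw))   # the uploaded reading carries only the grid cell

A real deployment would let the user choose the grid size per campaign, which is one way of giving the "enhanced users' control over data" discussed above.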
Current research
• Aggregating and validating citizen-generated and government data; resource discovery
• Selective sharing and context verification mechanisms, as well as application-level support for data-gathering campaigns
• Incentives for participatory sensing
• Evaluation of human agents as sensors

Disciplines of research: sensor networks, location services; psychology, economics of participation; privacy
Research instruments: testbeds and living labs, STREPs

Future research: long-term and short-term issues
Short-term research
• Sensing coverage, sensor calibration and sensor context for opportunistic sensing
• Quality verification for participatory sensing
• Privacy-compliant sensing and sharing
• Business models for participatory sensing
• Intelligent recruitment of collaborators and deployment of data collection protocols
• Anonymous, transparent use of human-carried sensing devices
• Evaluating behavioural change through participatory sensing
Long-term research
• Enhanced analytical techniques to make more accurate inferences from mobile-phone-supplied data such as location and images, and to automatically detect and respond to subtle events
• New personal-scale sensors to expand the range of information that individuals can track and use
• Privacy by design in participatory sensing
3.2.8. Identity Management

Introduction and definition
Digital identity management has long been a policy priority in the EU Member States, and large-scale investments have been made. In the context of collaborative governance, digital identity constitutes a fundamental pillar of trustworthy cooperation. Identity management systems include the control and management of the credentials used to authenticate one entity to another and to authorise an entity to adopt a specific role or perform a specific task. Global in nature, they should support non-repudiation mechanisms and policies; dynamic management of identities, roles and permissions; privacy protection mechanisms; and the revocation of permissions, roles and identity credentials. Furthermore, all identities and the associated assertions and credentials must be machine-processable and human-understandable.
At the EU level, the goal is to provide an interoperable, privacy-protecting infrastructure for eID that is federated across countries, with multiple levels of security for different services, relying on authentic sources, and usable in a private-sector context.
Alongside this, a flexible, context-dependent and interoperable identity management system is required for large-scale deployment. In particular, federated identity management systems are needed that ensure flexible deployment and the seamless integration of users' preferred identities, including commercial solutions (such as Facebook Connect) and open-source ones (such as OpenID). Particular focus should be put on usable delegation of privileges, which is very important for workflows and for integrating services.
Electronic identity management should identify non-humans (devices, sensors) as well as humans, in order to ensure validated identity in the context of participatory sensing and the Internet of Things. At the same time, eIdentity management should take into account the risks of information centralization in terms of data privacy and security. The cost-benefit trade-off between centralised and federated systems remains a key issue. Identity federation can be accomplished in any number of ways, some of which involve the use of Internet standards, such as the OASIS Security Assertion Markup Language (SAML) specifications, with the use of open-source technologies and/or other openly published specifications.

Why it matters in governance
Identity certification is one of the core tasks of government, and therefore pertains specifically to the governance context. This is reinforced by Meta Group (2002), which views the implementation of identity management "not as a differentiator but as a mandatory security consideration, a business imperative and a non-negotiable user expectation".
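To fix ideas on what an identity assertion involves, the following is a deliberately simplified, illustrative analogue of the flow that standards such as SAML formalise: an identity provider signs a short-lived assertion about a subject, and a service provider verifies the signature, expiry and audience before trusting it. The names, secret and claim fields are invented; a real federation would use a SAML or OpenID Connect implementation and public-key rather than shared-secret cryptography. Python standard library only.

    import base64, hashlib, hmac, json, time

    SECRET = b"shared-secret-of-the-identity-provider"  # illustrative only

    def issue_assertion(subject: str, audience: str, ttl: int = 300) -> str:
        """Identity provider side: sign a short-lived assertion."""
        claims = {"sub": subject, "aud": audience, "exp": time.time() + ttl}
        body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
        sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        return body + "." + sig

    def verify_assertion(token: str, audience: str) -> dict:
        """Service provider side: check signature, expiry and audience."""
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise ValueError("bad signature")
        claims = json.loads(base64.urlsafe_b64decode(body))
        if claims["exp"] < time.time() or claims["aud"] != audience:
            raise ValueError("expired or wrong audience")
        return claims

    token = issue_assertion("citizen-42", "tax-portal")
    print(verify_assertion(token, "tax-portal"))

The audience and expiry checks are what make such assertions usable across organisational boundaries: a token issued for one service cannot be replayed against another, which is the essence of the non-repudiation and revocation requirements listed above.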
Recent trends
The role of identity management is vital in the context of ICT for Governance and Policy Modelling. The importance of addressing eIdentity-related issues for secure public service provision, citizen record management and law enforcement has made identity management a strategic issue for governments at both local and international levels. Research on the design and implementation of privacy-preserving digital identity, its supporting management infrastructures and the delegation of authority has reached a satisfactory level. Nevertheless, one of the greatest problems in identity management is the lack of interoperability of digital identities and identity management systems between proprietary and standards-based systems, and between organisations and governments.
Current practice
• Electronic ID creation at national level
• Pilots in cross-border interoperability in the EU (the STORK project)

Public Policy Applications
The development of federated identity management would bring the following benefits at governmental level:
• Avoiding replicated effort: a reduction in the number of sign-ons and passwords needed to access multiple systems and databases, thereby decreasing cost and wasted time
• The possibility of defining a standards-based mechanism for sharing and managing identity information as it moves between discrete legal, policy and organizational domains
• Institutions would not have to establish separate relationships and procedures with one another
• User access to information can be granted and revoked more easily
• A reduction in the number of passwords accumulated: citizens either forget them or choose simple ones, thereby increasing insecurity and the possibility of fraud
• Increased security of user access to information and digital resources, as it eliminates the need to replicate databases of user credentials for separate applications and systems, which are potential weak points
• More sensitive information shared across government and organizational boundaries in case of crisis
• A focus on the users of information and services rather than on the entities that house those resources

Key challenges and gaps
• Fragmentation of identity research along disciplinary lines
• The need for new identity-proofing processes
• Privacy issues: use-limitation principles, avoiding pervasive surveillance
• The capability to efficiently integrate services throughout the chain
• Time-saving identification
• The specifications and nature of a digital identity dictated by the social and political environment of the country of issuance
• The increasing number of electronic identity-related crimes (identity fraud, identity theft, impersonation), which makes it difficult to guarantee the legitimacy of identities

Current research
• Culture-dependent identity systems
• Mobile and biometrics in eIdentity
• Privacy-protecting identity management systems
• User-centric identity, delegation of authority
Disciplines of research: legal, technological, social, economics
Possible research instruments: testbeds and living labs, STREPs

Future research: long-term and short-term issues
Short-term research
• Quantitative research on the cost-benefit analysis of interoperable identity
• Dynamic, user-controlled identity disclosure
• Formal verification of identity management systems
• Governance and legal issues, levels of assurance
Long-term research
• Context-dependent identity management
3.2.9. Global Systems Science

Introduction and definition262
The tools currently available to policy makers are insufficient to provide guidance on a global scale in facing present societal challenges, because of the connections across subject domains and the globalization of the policy challenges, which range from environmental threats to food security and energy sufficiency. Such challenges are multi-dimensional and borderless, and therefore cannot be solved by one single country or by one area of policy. In fact, current public policy making is targeted at individual rather than interrelated systems, and thereby struggles to achieve systemic change and to address challenges that are global and interconnected in scope, as they arise from the interplay of social, technological and natural systems. In this respect it is important to integrate scientific evidence into the social process in order to be able to address those challenges.
In this view we need a new multidisciplinary systems approach that takes into account the connections across policy areas (e.g. economy, transport, health and the social understanding of systemic risk) as well as across geographical borders. This new branch of science should take into account the multidimensionality of global problems, given the interconnectedness of decisions across different policy realms.
As stated on the Cordis website263, Global Systems Science "addresses new ways of supporting policy decision making on globally interconnected challenges such as climate change, financial crises, or containment of pandemics. The ICT engines behind GSS are large-scale computing platforms to simulate highly interconnected systems, data analytics for 'Big Data' to make full use of the abundance of high-dimensional and often uncertain data on social, economic, financial, and ecological systems available today, and novel participatory tools and processes for gathering and linking scientific evidence into the policy process and into societal dialogue.
GSS will develop further the scientific and technological foundations in systems science, computer science, and mathematics."
Some examples of global systems are the following:
• Energy, water and food supply systems
• The community of scientists
• The World Wide Web
• Globally spreading diseases
• The global financial system
• Climate policy
• The web of military forces and diplomatic relations

262 Among the sources for this research challenge is the position paper "GSS: Towards a Research Program for Global Systems Science", prepared for the Second Open Global Systems Science Conference, which took place on 10-12 June 2013 in Brussels, as well as other documents related to the GSDP consortium
263 http://cordis.europa.eu/fp7/ict/fet-proactive/fetconsult2012-topic09_en.html
ICT tools are very important for GSS, as they support large-scale, complex societal and infrastructure decisions that affect human life and health. On the side of policy informatics, they provide a scientific evidence base for policy, i.e. models and data integrated across different policy sectors. On the side of societal informatics, ICT tools that present models' results by means of games and visualization can create a better link between the stakeholders in the scientific and policy process, leading to a society-centred science. In this sense, ICT tools are also very useful for solving problems for policy makers, engaging domain experts and empowered officials early, and demonstrating the relevance of the issues treated.

Why it matters in Governance
There are two main motivations for adopting a Global Systems Science approach:
• Developing new models in support of decision-making: the most important issue to deal with today is the economic and financial crisis. A typical example of a global systems model is given by the interdependencies among the actors and institutions in the financial system, as well as the contagion channels between the financial system and the other sectors of the economy. Having neglected or overlooked such interconnections has led to dangerous chain reactions and a contagion of crises not anticipated by current economic models. In this respect, global systems models can offer an alternative to existing theoretical and empirical approaches, in particular when facing challenges that are global by definition, such as financial instability and environmental or energy policy. The point of arrival will be the definition of advanced simulation models that mimic factual conditions and human behaviours, and that embed empirical data on systemic dependencies as well as on the role of human behaviour. Such models will be founded on large-scale agent-based modelling, and will allow stakeholder participation and interaction, and online monitoring with feedback from individual citizens.
• Developing new models of governance: scientific modellers need to communicate and interact better with citizens, business people, politicians, civil servants, NGO representatives and other stakeholders, as concrete societal needs and policy decisions must drive the scientific questions to be asked, the data to be collected and how the models are to be conceived. Global Systems Science, by producing better models, can provide the decision-making process with insight into system behaviour and dynamic outcomes, leading to better policies.
In this respect, the current global governance model, based on nation states cooperating in international organizations, is unfit to meet global challenges, which is why a Global Systems Science is necessary.

Tools and Techniques in GSS
GSS includes, in general, computer science and mathematical approaches such as interaction-based computing; data topology and modelling languages; high-performance computation; data mining methodologies; methods for the specification and analysis of the dynamics of highly interconnected systems; specification, verification and validation of computational dynamics simulations; and formal approaches to the analysis of dynamical network abstractions for complex system representation. More particularly, these include:
• Agent-based or multi-agent models, which are synthetic virtual realities populated by
artificial agents that interact adaptively with each other and also change with the overall conditions of the environment
• Analyses of networks based on maps of relationships or linkages among the constituents of systems, from which it is possible to identify configurations that are especially unstable and can be used as predictors of catastrophic failures in real-life networks
• Data mining: techniques for finding patterns and relationships in large data sets with complex qualities, applicable to nonlinear and discontinuous phenomena
• Modelling of artificially constructed scenarios representing hypothetical models of complex systems that reflect their key constituents and dynamics, which can be used to anticipate the effects of various conditions and to identify policies that are robust to many likely futures, for instance in the case of man-made or natural disasters
• Sensitivity analysis, used to assess the behaviour and evolution of complex systems under shocks to the underlying parameters, performed by means of numerical techniques
• Dynamical systems models, i.e. sets of differential equations or iterative discrete equations used to describe the behaviour of interacting parts in a complex system, and used to simulate the results of alternative system interventions as well as the unintended consequences of policies (a minimal sketch applied to the epidemic example is given below)

Policy Applications of GSS Tools
Health. In contrast to traditional epidemic models, in which each agent bears the same probability of infection, agent-based models entail a heterogeneous population interacting in a changing environment, leading to more realistic tests and predictions of new policies. As an example, some complexity-science simulations have shown that reductions in air traffic (even of 20%-50%) would not dramatically slow the spread of certain epidemics. On the other hand, the massive storage of smallpox vaccine would reduce the number of infections in the case of a biological terror attack.

Urbanization. More complex patterns displace the classic centre-periphery structures and put into question the distinction between nature and culture. Moreover, urban lifestyles are blended with the global awareness fostered by ICT. In this view, urbanization raises major challenges: innovations may worsen already worrying trends, urbanization can undermine communities, leading to new forms of violence and anomie, and health problems such as circulatory diseases, cancer, obesity and epidemics can be aggravated. GSS is able to explain how settlement structures and lifestyles are modified by the interaction between the global urban system and the global ICT system, as well as how policy-makers can influence their future dynamics.
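As a minimal, purely illustrative sketch of the dynamical-systems approach listed above, applied to the epidemic example: a classic SIR model integrated with explicit Euler steps in Python, used to compare two hypothetical interventions through their effect on the contact rate. All parameters are invented for illustration and are not calibrated to any real disease.

    def sir(beta, gamma=0.1, s=0.99, i=0.01, r=0.0, dt=0.1, days=160):
        """Susceptible-Infected-Recovered dynamics, explicit Euler integration."""
        for _ in range(int(days / dt)):
            ds = -beta * s * i              # new infections leave S...
            di = beta * s * i - gamma * i   # ...enter I, recoveries leave I...
            dr = gamma * i                  # ...and enter R
            s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        return round(r, 3)                  # final attack rate

    # A policy lowering the contact rate shrinks the share ever infected:
    print("baseline        :", sir(beta=0.30))
    print("reduced contacts:", sir(beta=0.18))

Even this toy exhibits the non-linearity that makes such models useful for policy: halving the contact rate does much more than halve the final attack rate, a point that equilibrium reasoning alone would miss.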
Traffic. Analytic techniques can be used to anticipate life-threatening traffic phenomena, as well as to reduce pollution and improve traffic flows in order to save time and energy. This advanced modelling approach incorporates human cognition and has been adopted for predicting unexpected events such as traffic jams, so as to automatically alert drivers via wireless communication devices. This class of models is generalisable to other types of situations, e.g. outbreaks of civil unrest.
Similar methods have been used successfully for analysing human foot traffic and have been applied to prevent stampedes during the Hajj in Mecca.

Crisis management and security. By using global systems science and complex systems it is possible to enable bottom-up disaster response capabilities, as well as to implement a proactive approach to disaster preparation and planning by means of policy simulations. Moreover, network-analysis methods can be used to attempt to identify associations of terrorists, including pinpointing the locations of key dangerous individuals.

Climate change. Even the most advanced climate change models are missing the social and human aspects of the issue, given the strict interconnection between nature, economy, finance, energy and the industrial structure. In this respect, complexity and global systems science techniques make it possible to identify tipping points in the human-earth system. An example is given by the management of water resources: water stresses occur regularly in different geographical locations, showing that a tipping point, leading to massive long-term water shortages, may be close. GSS will support global climate policy by highlighting its benefits, from reduced health impacts to productivity growth accelerated by new directions and volumes of investment, in order not to be stuck in multiple basins of attraction. This will require new models as well as greater interaction between different policy fields such as environment, energy, employment, health and foreign policy. Policy makers will join forces to show that increased economic well-being is possible with systematically decreasing emissions, to strengthen resilience while reducing emissions, and to prepare for the need to take CO2 back from the atmosphere, especially once global poverty has been overcome.

Financial markets. Decision support and analysis tools based on modelling and simulation, conceived within the scope of global systems science, can allow the theoretical testing of the resilience of proposed financial regulations, in order to avoid the dramatic instabilities of recent times. These classes of models can offer a crucial supplement to traditional analysis, as they emphasise dynamism rather than equilibrium, real attractors rather than theoretically prescribed ones, positive feedback loops, and phase transitions. The current financial crisis did not completely crash the euro economy thanks to the president of the ECB, Mario Draghi, who pushed the market from a bad to a good equilibrium rather than treating the crisis as a shock that had to be absorbed by the capacity of the markets to return to a stable equilibrium.
In the coming decades, GSS will be necessary for the development of an integrated governance of global risks, taking into account the interactions between financial and other markets, as well as socioeconomic dynamics at different scales, and relying on the analysis of large data sets to monitor the complex networks of world agents. Researchers and policy makers should join forces within the scope of GSS in order to design and implement effective measures towards a financial sector that supports increasing employment and sustainable economic growth, such as rules to limit the risky dynamics of complex financial systems, regional experiments with innovative schemes to foster sustainable growth, and the creation of a global monetary system.

Methodological Aspects
First of all, GSS will rely on computer models in order to tackle the complex multi-scale (spatial and
temporal) structure of global systems. However, computer models are ill suited to dealing with the ambiguities that are a vital ingredient of human life. In this respect, narratives delimit the scope within which a particular model is useful and help us understand what goes wrong when it is used beyond that scope.
Another interesting methodological realm of GSS is high-performance computing (HPC), which should provide policy-makers with scientific evidence accompanied by an assessment of its reliability, validity and relevance, by exploring complex sample spaces of parameter values and boundary conditions. To this end, computational skills must be combined with strong skills in communication and in assessing the relevance of evidence for addressing specific practical issues.
Moreover, Big Data, if used in new ways, can become an essential tool for perceiving global systems. GSS can define practical problems and preliminary concepts that can be used to mine big data sets, which can be obtained by crowdsourcing, with a view to describing the dynamics and structure of global systems, by exploiting the relation between models and narratives of globalization. The resulting output will be used to improve problem definitions and concepts, as well as to monitor the intended and unintended consequences of policies dealing with global systems.

Key challenges and gaps
GSS entails a number of challenges and gaps:
• Model validation in terms of the underlying assumptions and parameter choices
• The creation of ICT platforms for setting up, executing and validating large-scale models. In particular, it would be necessary to establish agreed-upon standards for validation and calibration in order to improve the models' credibility with policy makers
• Tools for gathering, integrating and linking data from various sources (financial data, socio-economic data, data on financial and economic networks, ecological and energy data, and even data on the nature of human decisions), and tools for knowledge elicitation
• Decision-support tools: scientists and researchers should try to formulate the results of their work in terms that policymakers can use, for example through simulations showing different scenarios in a global setting
• Interoperability of models: models should be built so that the results of modelling are comparable, eliminating the heterogeneity that prevents non-experts from choosing and applying models, as well as from gauging their relevance and credibility.
Moreover, we should be able to run simulations at different degrees of granularity
• Institutional adaptation: the development of a science that is global in scope requires the coordination of research and education efforts at a global level

A more comprehensive set of challenges and opportunities can be found at http://goo.gl/qWbhG264. Here we summarize the main points:
• Breaking inter-disciplinary boundaries, stimulating cross-disciplinary fertilization and removing silos between organisations, governments, scientists/technologists and other stakeholders

264 This set of challenges and opportunities was developed by the experts convened at the brainstorming meeting on "Global System Sciences: the role of models and data", which took place in Brussels on 7-8 February 2013. http://www.isi.it/events/workshop-on-global-systems-science-role-of-models-and-data-brussels-february-7-8-2013
• Building the scientific foundations of GSS, scoping the technologies and methods, changing basic paradigms in science and making society and humans part of the phenomenology of science
• Managing and making sense of Big Data: managing the increased input of data/information and analysing big data while preserving ethical values and respecting privacy and civil rights
• Models, simulation and languages: creating a common universal language to precisely describe and simulate models, and closing the gap between formalized models and the real-world peculiarities of systems
• Infrastructures and resources: developing a culture of ICT as the fabric that connects nature and society, and pushing for new conceptual advances in order to overcome the limits of computing
• Ownership and regulation: enabling scientists to act on society and to access data without violating individual rights to privacy, and managing copyright/IPR in the hyper-connected world
• GSS and policy making: ensuring the take-up of GSS by decision makers and policy makers, enabling policy feedback to be reflected in the models, and creating tools to effectively support decision making
• Communicating GSS and engaging society: raising awareness of complex systems science and promoting GSS skills and curricula

As for the use of ICT tools in GSS, there are several categories of challenges265, summarized in Table 2:
• Evidence: scientific data; scientific code; complex modelling and simulation; software architecture, sustainability, interoperability and interfaces; common data interchange formats; domain-specific languages
• Dissemination: visualization; user interface; independent validation; ICT tools for learning and understanding; accessibility
• Global reasoning: tools for interactive participation; computational linguistics for narratives and automatic translation
• Proposal: user-friendly simulation tools for prediction
• Governance: computer security for trust management; ICT platforms; ICT tools for transparency and participation; semantic frameworks for computer-aided decision making
• Implementation: use of ICT platforms; crowd-sourced development; feedback to the scientific data management and the model; validation

Table 2: Challenges in the use of ICT tools for Global Systems Science

265 Source: adapted from the presentation delivered by Ulf Dahlsten at the First Open Global Systems Science Conference (Brussels, 8-10 November 2012). A Tech4i2 delegate attended the meeting. http://blog.global-systems-science.eu/?page_id=938
Current Research
• Computer science for interacting informational, technological and social networks
• Computer science of very large systems: high-performance computing and data-driven societal science
• Advanced computing for network science
• Network science as an integrating framework for real-world complexity
• Network approaches for governance and policy tools
• Computational modelling in complex realities
• Computational and digital epidemiology
• Mathematics of complex systems

Future Research
There is as yet no mathematical model able to describe all the interactions between the components in Global Systems Science. So far, scientists have implemented components and their interactions in code, but there is a need to create an intermediate, mathematical layer between narratives and simulations, as exists in physics. This mathematical layer cannot be based on mere partial differential equations or functional analysis. On the contrary, as the formal language of GSS is computer code, the mathematical layer has to be embedded in the mathematics of general programs. In practice, computer science should play for GSS the same role that mathematics plays for physics. In this sense, one or more of the formal languages for specifying and reasoning about programs needs to be adapted and extended to GSS models. The main candidate so far is constructive type theory, which can be used to express both programs and classical mathematical results.
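As a flavour of what "expressing both programs and mathematical results" in one formal language looks like, here is a toy sketch in Lean 4, a proof assistant based on constructive type theory (assuming a recent toolchain in which the omega tactic is available). The function and theorems are invented for illustration and have nothing to do with GSS models themselves.

    -- A program (a function on natural numbers)...
    def double (n : Nat) : Nat := n + n

    -- ...a concrete fact about it, checked by computation:
    theorem double_two : double 2 = 4 := rfl

    -- ...and a general property, proved in the same language:
    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      unfold double
      omega

The hope sketched above is that GSS simulation components could similarly carry machine-checked statements of the properties they are assumed to satisfy, making models comparable and their validity auditable.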
4. The case for policy-making 2.0: evaluating the impact

These technologies and methodologies certainly show great promise for better policy-making, but to what extent do they genuinely lead to better policy-making?
In this section, we provide an overview of the evidence regarding the actual impact of policy-making 2.0 tools. To do so, we extract the main related findings from the case studies, from the prize on policy-making 2.0 launched by the project, and from the survey of users' needs. Based on the findings, we propose an additional research challenge on the impact evaluation of policy-making 2.0.

4.1. Cross analysis of case studies
In deliverable D5.2, the CROSSOVER project provides an in-depth analysis of four case studies:
1. Gleam
2. Pathways 2050
3. UrbanSim
4. Opinion Space
In this section, we extract and analyse the relevant information about their impact on the quality of policy-making.
4.1.1. Global Epidemic and Mobility Model
GLEAM, the Global Epidemic and Mobility Model266, is a discrete stochastic epidemic computational model based on a metapopulation approach in which the world is divided into geographical census areas connected in a network of interactions by human travel fluxes corresponding to transportation infrastructures and mobility patterns. The GLEAM 2.0 simulation engine includes a multi-scale mobility model267 integrating different layers of transportation networks, from long-range airline connections to short-range daily commuting patterns268, and it elaborates stochastic infectious disease models to support a wide range of epidemiological studies, covering different types of infections and intervention scenarios, in order to respond to the spread of a pandemic crisis in a very short time. Real-world data on population and mobility networks are used and integrated into structured spatial epidemic models to generate data-driven simulations of the worldwide spread of infectious diseases.
GLEAM is mostly used in the design stage of the policy-making cycle, and it is able to manage and visualize a huge amount of complex and diverse data (e.g. a detailed airline transportation model). It draws on data from census agencies, population data at very high resolution, a NASA world map dividing the world population into 5x5-mile area boxes, the entire database of airlines, about 40 databases from different countries for local mobility, transfers, etc. In addition, GLEAM has moved beyond research, as in the H1N1 epidemic case, when the simulation derived from the application of GLEAM was used ex post and resulted in a particularly accurate analysis. GLEAM is nowadays used both in research initiatives (e.g. the EPIWORK IP project269, the EPIFOR project270) and by formal policy-making agencies (e.g. the US Defense Agency). Moreover, GLEAM is also used in educational courses, both at high school and at university level.

Figure 4: The three population and mobility data layers in GLEAM

Impact of GLEAM
The main impact of GLEAM so far has been the production of a real-time forecast for the H1N1 pandemic, a quite successful exercise that showed the power of the model. A validation paper (Tizzoni et al. 2012), published in December 2012, showed that the GLEAM predictions were quite accurate.

266 http://www.gleamviz.org/
267 http://www.gleamviz.org/model/
268 GLEAM in Detail. Available at: www.GLEAMviz.org/GLEAM-in-detail/
269 EpiWork - Developing the framework for an epidemic forecast infrastructure.
Available at: http://www.epiwork.eu
270 EpiFor - Complexity and predictability of epidemics: toward a computational infrastructure for epidemic forecasts. Available at: http://www.epifor.eu
Many stakeholders are also using the software to support their policy-making procedures when designing measures to prevent or constrain the spread of diseases. Examples include the US Defense Agency, the JRC, and other organisations. It has to be noted that the JRC is using the tool in its long-term strategy for studying and responding to the spread of epidemics (communicating the simulation results to DG SANCO policy officers), based on the experience accumulated from using the GLEAM toolkit during the H1N1 outbreak.

The core innovation of GLEAM lies in the computational model, which can integrate data from various sources and provide a close to real-time forecast (by combining various real-time data sources) of the spread of epidemics at a global level, which was not possible before at that level of precision and timeliness. Moreover, through the visual interface users are in a position to create their own models and investigate the specific diseases and issues they are interested in.

4.1.2. UrbanSim

UrbanSim271 is a software-based demographic and development modelling tool for integrated planning and analysis of urban development, incorporating the interactions between land use, transportation, environment, economy and public policy with demographic information. It simulates in a 3D environment the choices of individual households, businesses, and parcel landowners and developers, interacting in urban real estate markets and connected by a multi-modal transportation system. The 3D output resulting from the process underpinning the simulation model is presented using indicators, i.e. variables that convey information on significant aspects of the simulation results.

Figure 5: UrbanSim Land Maps

This approach works with individual agents, as done in agent-based modelling, and with very small cells, as in the cellular automata272 approach, or even at building and parcel levels. UrbanSim

271 http://www.urbansim.org
272 http://en.wikipedia.org/wiki/Cellular_automaton
however differs from these approaches by drawing together choice theory273, a simulation of real estate markets, and statistical methods in order to achieve accurate estimation of the necessary model parameters (such as land policies, infrastructure choices, etc.) and to calibrate the uncertainty in its system. As an example of its use, one can refer to the project on Modelling Land Use Change in Chittenden County274, where model parameters based on statistical analysis of historical data are integrated with market behaviours, land policies and infrastructure choices in order to produce simulations of household, employment and real estate development decisions (where the first two are based on an agent-based approach and the latter on a grid-based approach).

Impact of UrbanSim

As far as impact is concerned, the European cases are not at the same level as the US ones. In the US there are quite a number of MPOs that actively utilize the UrbanSim platform. The most indicative application, representing the approach common in the US, is probably the San Francisco Bay one. That case involved examining and analysing five alternative scenarios, which required articulating a set of assumptions about land use policies, transport policies and macro-economic growth (the analysis is now complete; relevant publications will be available in the next few months).

In one of the scenarios, analysing the visibility of the proposed policy through reverse engineering was attempted, which made the task much more challenging, both in terms of research and implementation. The agency has now accepted the results, with documentation and visualization supporting them.

In the San Francisco case, the 3D visualization system was created in order to achieve higher visibility amongst citizens than the plain UrbanSim tool. The intention was to use this system in a number of workshops held during January 2012. User engagement was intense even from the development/testing phase. In addition, the public agencies used it in a series of meetings with community organizations, each with from 15 up to 200 participants. The point of these meetings was to communicate the different scenarios to the public and to receive feedback on the preferences of the citizens.

One of the most innovative elements of UrbanSim is the combination of various technological and theoretical aspects, as well as the withdrawal of strong assumptions commonly made in urban planning (such as that markets are in equilibrium) in favour of weaker ones. For example, the impacts of transport projects on urban planning are far from instantaneous (in fact they may evolve over decades).
In addition, the capacity to support these weaker assumptions can itself be considered a core innovation.

4.1.3. Opinion Space

Launched by the U.S. Department of State275 in collaboration with UC Berkeley, which developed it, Opinion Space bridges the worlds of politics and social media in an interactive visualization forum, where users can engage in open dialogue on foreign affairs and global policies. It invites users to share their perspectives and ideas in an innovative visual "opinion map" that illustrates which ideas generate the most discussion and which ideas are judged most insightful by the community of participants.

273 http://en.wikipedia.org/wiki/Choice_theory
274 http://www.uvm.edu/rsenr/countymodel/Workshop08bv3.ppt
275 U.S. Department of State. Available at: http://State.gov
Using an experimental gaming model, Opinion Space incorporates techniques from deliberative polling, collaborative filtering, and multidimensional visualization. The result is a self-organizing system that uses an intuitive graphical "map" to display patterns, trends, and insights as they emerge, and employs the wisdom of crowds to identify and highlight the most insightful ideas.

Figure 16: Rating others' opinions in Opinion Space

Opinion Space is fully operational in its current state. Nevertheless, as a research platform it remains experimental. The large amount of data collected is highly structured, which supports continuing research on text analysis, statistical modelling, etc.

Impact of Opinion Space

One of the first and main indicators of the impact of Opinion Space, which applies mostly to the "agenda setting" and "monitor and evaluation" phases of the policy cycle, was the participation rate: users that arrive at the platform for the first time versus those that become active participants. People that arrive at a website always outnumber those who actually participate (in some projects the rate was close to 50% and in others around 10%).

In the State Department instance (of Opinion Space 3.0), more than 2000 different ideas were collected (about US foreign policy). In addition, more than 5000 individual responses were collected. It cannot be said whether the final decisions were based on some of the ideas provided, but a detailed report was provided to the policy makers. The project with a US auto-maker (targeted towards identifying ways of improving its image) resulted in about 1000 ideas and about 100,000 ratings evaluating those ideas (more specifically, participants talked about green vehicles). One of the core innovations and successes of Opinion Space is the very fast way it allows users to browse (and rate) a large number of ideas (even if this is a visualization-oriented innovation). From the scientific point of view, the greatest innovation was bringing statistical analysis to structured discussion data.

One of the best endorsements of Opinion Space was Hillary Clinton's reference to the initiative. Other endorsements include high-level officers of collaborating companies, as presented on the Opinion Space website. As far as the Opinion Space team is aware, Opinion Space has not yet
been incorporated in any formal decision-making procedures. The State Department, however, uses Opinion Space informally in order to get ideas and opinions on specific policies.

4.1.4. 2050 Pathways Analysis

The UK Department of Energy and Climate Change (DECC) built the 2050 Pathways Analysis Calculator to help the public engage in the debate, and to enable Government to ensure that its short- and medium-term planning was consistent with achieving the long-term aim. More specifically, as the UK is committed to reducing its greenhouse gas emissions by at least 80% by 2050, relative to 1990 levels, a transformation of the UK economy is needed while ensuring secure, low-carbon energy supplies to 2050, and the country faces major choices about how to do this. In the Carbon Plan published in December 2011, the Calculator was used to illustrate three 2050 futures that show some of the plausible routes towards meeting the target.

The 2050 Pathways Analysis features four resources:

1. A web-based tool for the public to try their own ideas for reducing greenhouse gas emissions.
2. An in-depth Excel-based tool and reporting system which includes the methodology and the models used for the analysis.
3. A web-based presentation for younger audiences about greenhouse gas emissions.
4. A toolkit for leading an energy debate in schools.

The 2050 Calculator is targeted at citizens, policy makers, senior officials and politicians as well as technical experts, through different interfaces.

The 2050 Pathways Analysis presents a framework through which it is possible to consider some of the choices and trade-offs we will have to make over the next forty years. It is system-wide, covering all parts of the economy and all greenhouse gas emissions released in the UK. It is rooted in scientific and engineering realities, looking at what is thought to be physically and technically possible in each sector276.

2050 Pathways is a tool to help policy makers, the energy industry and the public understand these choices. For each sector of the economy, four alternative trajectories have been developed, ranging from little or no effort to reduce emissions or save energy (level 1) to extremely ambitious changes that push towards the physical or technical limits of what can be achieved (level 4).

The 2050 Pathways Calculator, available on the DECC website, allows users to develop their own combination of levels of change to achieve an 80% reduction in greenhouse gas emissions by 2050, while ensuring that energy supply meets demand277.

The supportive tools of the initiative provide different ways of exploring a low-carbon future for the UK, and they can be tried out:

• By creating one's own pathway using the 2050 Web Tool.
276 Department of Energy and Climate Change. https://www.gov.uk/2050-pathways-analysis
277 HM Government (2010). 2050 Pathways Analysis. Available at: http://www.decc.gov.uk/assets/decc/what%20we%20do/a%20low%20carbon%20uk/2050/216-2050-pathways-analysis-report.pdf
• By exploring what a low-carbon UK might look like in 2050 by playing the simplified My2050 simulation.
• By taking the debate into the classroom with the schools toolkit.

Figure 17: Playing the My2050 game for the demand side

As far as the CROSSOVER Policy Cycle is concerned, the project probably fits in the first step, that of Agenda Setting. This is due to the fact that the concept is a high-level one (e.g. reduce gas emissions by 80% by 2050). As the data are currently being updated and a comparison between the projected and the actual results will take place, the case could in the near future fit into the Monitor and Evaluation step of the policy cycle as well.

Impact of 2050 Pathways Analysis

The numbers of visitors and of interactions with the tool have demonstrated the success and impact of the case. In the first three months from the official project launch there were about 10,000 unique visitors to the platform. For My2050, over 16,000 pathways have been created to date. Regarding stakeholders, about 200 were involved in the initial (building) phase, and after the launch about 500 stakeholders were contacted. Moreover, a week-long online debate involving 5-6 experts took place, with many comments from the general public. It is important to note that there are Master's programmes, both in and outside the UK, that use the 2050 Pathways models and tools in their courses. In addition, the My2050 game is also communicated to pupils of various schools in the UK; there is a "schools' toolkit" available for download from the project's website, as well as from other websites, including the Department for Education website.
It has to be noted that, due to the project's open source nature, it is quite difficult to tell how many people are using the platform, and who exactly they are.

In addition, a large number of presentations have been given at workshops, schools and conferences, and to NGOs, international colleagues, etc. A presentation was made to the European Commission too. Very positive media coverage has also been observed (around 15 key articles regarding the project278,279). Other references to the case have also been made (e.g. at cultural festivals).

4.1.5. Cross analysis of the case studies

In this section we analyse common features of and differences between the case studies with regard to:
- Usage (policy phase, policy domain, participation, involvement of decision-makers)
- Impact (satisfaction, role in the actual decision taken, quality of the policy)

278 https://www.gov.uk/2050-pathways-analysis
279 http://www.involve.org.uk/2050-pathways-public-dialogue/
|                                              | 2050 Pathways                                        | GLEAM                                        | Opinion Space 3.0 | UrbanSim                                         |
| Policy phase                                 | Design                                               | Design                                       | Agenda Setting    | Design                                           |
| Policy domain                                | Energy                                               | Health                                       | Foreign Policy    | Urban Planning                                   |
| Number of participants                       | 16,000 pathways created                              | Not relevant                                 | 2000 ideas        | 100s                                             |
| Involvement of decision makers               | High                                                 | High                                         | High              | High                                             |
| Actual usage of the output in policy-making  | High (used in the main Low Carbon Strategy document) | High (used by international agencies)        | Low               | Medium (used by several US municipalities)       |
| Press impact                                 | High                                                 | Low                                          | High              | n.a.                                             |
| Feedback by policy-makers                    | High                                                 | n.a.                                         | High              | n.a.                                             |
| Actual improvement of policy quality         | n.a.                                                 | 1 paper positively reviewed the predictions  | n.a.              | n.a.                                             |

Table 3: Cross analysis of the cases' impact

Firstly, we can match the cases under scrutiny to the different phases of the policy cycle. As we can see from Table 3, most of the cases are related to the "Design" of policies, while there is limited coverage of the "Agenda Setting", "Implementation" and "Monitor and Evaluation" phases280.

This is due to the fact that the key challenges faced by policy makers (e.g. "the need to detect and understand problems before they become unsolvable" or "the reduction of uncertainty on the possible impacts of policies") require a certain degree of proactivity in order to deliver high-quality, evidence-based and impact-oriented policies, rather than performing trials under real conditions. In this respect the "Design" phase seems to prevail over the others when it comes to the tools most desired by policy makers. In particular:

• In the "Design" phase, policy makers are able both to explore their options and to seek an ex-ante assessment of the policies under consideration from the citizens' perspective. On the other hand, it is possible that decisions have already been taken, in which case the emphasis is laid on the implementation of policies.
• In the "Implementation" phase, the main objective is to increase acceptance and collaboration between the decision makers and the citizens, on the basis of already established terms. It is worth noticing that this improved collaboration is handled by tools and methods focusing on the communication of messages aimed at favouring the smooth implementation of a policy. Those tools do not belong to the "core" Policy-Making 2.0 methods, even though they

280 X marks the answers retrieved directly from the responsible team of each case, while @ marks potential usage as envisaged during the analysis.
display a close relation with them.
• In the "Monitor and Evaluation" step, decision makers are informed about the impact of already deployed policies; here it is possible to identify only a few ICT-based tools and methods that are really having an impact and engaging fruitfully with stakeholders and citizens. There are many discussion tools, which policy makers have already experimented with, but which have so far shown many limitations and constraints.
• The same issues apply to the "Agenda Setting" phase, as there is an absence of new ways to massively engage citizens during the early procedures that precede the actual design phase. Most of the tools have been around for many years now, and in some cases are merely refurbished with some new tweaks and upgraded features. Crowdsourcing seems to fit this stage very well, but again the impact of such experiments remains anecdotal, and the results are not embedded in policy-making, at least in the cases analysed.

In these cases, the policy-making 2.0 tools have been used to address real problems in sensitive policy domains, and all have been initiated either by governments or as a result of collaboration between researchers and public administrations at different levels, mainly in a top-down approach. In particular, GLEAM and Opinion Space 3.0 were initially introduced as research initiatives that gathered significant attention and subsequent funding from public authorities. In fact, all cases build on a wide range of techniques that result from research, and they exemplify how research can be effectively applied in real-life settings and public policies. Multi-disciplinarity in the teams of all cases has brought together different perspectives and ensured appropriate modelling of policy options and interpretation of outcomes. Building a dynamic dialogue with policy makers, external stakeholders (NGOs, academia, industry) and specific experts has provided significant insights and feedback in all cases (to different extents: in GLEAM, for example, the participation of citizens is limited). Further, the real support of public officials and experts has been instrumental in the success of all cases. To address the targeted needs of policy makers and citizens, and to allow them to contribute in a more efficient and productive way to the policy issues at stake, dedicated tools have been developed in each case study. Naturally, the learning curve required to understand and use a policy model varies significantly in each case (it depends on the complexity of the policy model(s) running in the background whether it can be used effectively by policy makers).

Uptake by participants varies, from a few hundred up to several thousand.
Impact evaluation, however, was not built into the initiatives from the beginning. Typically, no specific Key Performance Indicators (KPIs) were set, and no evaluation was envisaged. However, the numbers of visitors and of interactions have demonstrated their success and impact, which has been reinforced with the help of appropriate stakeholder engagement strategies. It needs to be noted that in some cases (GLEAM) users resorted to the corresponding platform as a result of a natural phenomenon (i.e. the H1N1 pandemic), whereas in others (Opinion Space 3.0 and 2050 Pathways Analysis) it was the outcome of large press coverage that demonstrated the value of the cases. From studying cases that had strong internationalisation aspects (i.e. transferring experience from national to international level in 2050 Pathways Analysis, and from the US to the EU in UrbanSim), the difference in socio-cultural dimensions emerges and should not be neglected, as it may determine the success of applying a case to different geographic settings and socio-technical landscapes.

4.2. Survey of Users' Needs: results

The CROSSOVER project delivered a survey of users, presented in Deliverable D5.1. As part of the survey, respondents were asked which ICT tools and methodologies they need and have adopted as
part of the governance and policy-making processes they are involved in. Figure 18, which displays some preliminary and selected results, shows that Open Data and Big Data methodologies are already adopted by more than 30% of respondents. Moreover, other methodologies in use are strictly related to Open and Big Data, such as visual analytics, which can be used to make sense of large amounts of data, and large-scale simulations, which need large amounts of data to be performed.

Figure 18: Adoption of ICT Tools and Methodologies for policy-making (source: CROSSOVER Survey of Users' Needs 2012)

In the same way, Figure 19 presents the respondents' views regarding the needs and challenges in the policy-making process.
Figure 19: Needs and Challenges in the Policy Making Process (source: CROSSOVER Survey of Users' Needs 2012)

The horizontal axis reports the average score given to each option, on a scale from 1 (not important) to 5 (very important). As we can see, the most relevant challenges in the policy-making process are "Detect and Understand Problems before they become unsolvable" and "Understand the Actual Impact of Policies". At any rate, the differences among the various options do not seem overly significant, suggesting that the broad range of challenges in policy making is recognized as important.

The respondents were also asked to suggest other important challenges pertaining to policy making, beyond the ones offered in the online questionnaire. The suggested challenges include:
• Create an effective and collaborative dialogue among policy makers and affected stakeholders
• Ensure reversibility as well as basic societal values (e.g. security, equality, privacy, etc.)
• Analyse and visualize information for identifying problems
• Foster more direct communication between citizens and policy makers
• Translate citizens' input into actionable outputs
• Create common understanding across areas of responsibility
• Secure buy-in from key stakeholders and prevent blocking of new policy by vested interests
• Encourage the widespread acceptability of simulation as a public policy tool
• Take responsibility for choices made by oneself or one's own team.
Comparing the ICT tools and methodologies adopted with the challenges and needs of the policy-making process, it is clear that detecting and understanding problems before they become unsolvable, and understanding the actual impact of policies, are possible only by means of advanced policy modelling and simulation tools and techniques. And as already mentioned, the precision of large-scale simulations and modelling is enabled and improved by the availability of large amounts of data. From the analysis of these preliminary findings, it seems evident that the use of big data, and ways to analyse and exploit it, is perceived as an important need by policy makers, and it is already partly applied in some spheres of public governance. However, our analysis found that in most cases the application of techniques and methodologies to make sense of massive amounts of data is still in an embryonic stage and remains largely at an experimental level. This is confirmed by the findings of the mapping and identification of case studies conducted as part of the CROSSOVER project, and in part illustrated in the previous section.

4.3. Analysis of the prize winners

The project also launched a prize competition for the best policy-making 2.0 application. The prize was assigned based on the criteria of technological innovation, uptake and impact. We now present descriptions of the three winners, drawn from their applications to the prize.

IdeaScale SAVE Award

Describe briefly the context of the solution

President Obama's belief was that the best ideas for potential government savings would come from the front lines (federal employees). In 2009, he launched the SAVE Award (Securing Americans Value and Efficiency), hoping to find ideas that would make government more effective and efficient and ensure taxpayer dollars were spent only on what was necessary. Not only would this help reduce the debt, but it would benefit every American taxpayer. At that point, there was no existing system that could amalgamate a steady stream of ideas and feedback around those suggestions. Out of a desire to remain true to the values of the open government initiative (transparency, participation, and collaboration), the White House selected IdeaScale as the most viable solution that served all of these needs. Not only did it allow employees to submit ideas, but they could vote on those ideas, comment on and improve them, and the best ones rose to the top for review. Over the past four years, federal employees have submitted tens of thousands of cost-cutting ideas through the SAVE Award. Dozens of the most promising ideas have been included in the President's Budget.
Each year the OMB narrows the best ideas down to a "final four". The American people vote online to choose the winner. The winner then comes to Washington to present their idea to the President. They needed a system that would serve that entire process: submission, voting, evaluation and monitoring, transparent presentation and collaborative development.

What impact did it have on the quality of policies?

Over the past four years, the White House has collected thousands of ideas that cut costs and improve efficiency. This has allowed the White House to meet its main goals:
• Generate suggestions: nearly 100,000 ideas have been collected in the past four years of SAVE Awards. Engagement remains high, with thousands of users signing on to submit, vote, and comment each year. These ideas come from every government arena and from numerous geographic locations, allowing nationwide collaboration.
• Improve government programs and save money: each year a winning idea was selected, and each has been assessed as potentially saving the government millions of dollars.
• 2009: As is the case in most hospitals across the country, medicine used in the hospital is not given to patients to take home; instead, it is thrown out. Nancy Fichtner proposed ending this practice and sending excess medication home with patients. This is expected to save $21 million by 2014.
• 2010: The winning idea was to reduce the number of hard copies of the Federal Register by offering an opt-in choice for those who wanted to access the Register in print. This is expected to save more than $4 million per year.
• 2011: Matthew Ritsko suggested that NASA employees form a lending library of tools that they can share, rather than purchasing costly equipment each time they need to build something.
• 2012: Frederick Winter proposed that all Federal employees with transit benefits adopt the reduced senior fare as soon as they are eligible. In the DC area, this change would lower the cost of employee travel by 50 per cent.
• Improve engagement: Federal employees have stated that they feel more empowered by the tool available to them. In its first year, the Executive Office of the President of the United States received 38,000 SAVE Award submissions.

How extensive was policy maker and public take-up?

The application required a minimal commitment on the part of the policymaker, because the ideas had been submitted and prioritized by the crowd at large. Although all ideas were reviewed, the most promising options revealed themselves at an early stage of review. The contribution on the part of each individual was minimal as well, since the submission of ideas and voting minimized the time commitment for all.

Clear goals and a ready-made solution allowed IdeaScale to successfully deploy the SAVE Award community against an extreme timeline. IdeaScale successfully delivered its ideation software on time and within budget to the Executive Office of the President. The platform scaled easily and has never shown any strain under a high volume of users (nearly 90,000 over the past four years of SAVE Awards). In the first three weeks of 2009 alone, nearly 40,000 ideas were collected.
Liquid Democracy

Describe briefly the context of the solution

The Enquete Commission on Internet and Digital Society was a temporary parliamentary committee, established for the period 2010-2013. During this period, politicians and experts worked together to develop policy recommendations on socially relevant and complex internet policy issues. This Enquete Commission decided, for the first time ever in German history, to open up its decision-making processes and to work with an eParticipation platform, which aimed to enable wider online participation by the citizens involved. The platform, called enquetebeteiligung.de, was implemented by the non-profit association Liquid
Democracy e.V., based in Berlin, which also develops the open-source participation software Adhocracy. On this platform, the citizens involved could draw up proposals and solutions for issues concerning the commission, discuss relevant topics, and vote on the proposals. The most popular proposals were incorporated in the final report of the Enquete Commission on Internet and Digital Society, the official policy guideline on the debated topics. The success lies in the fact that two of the twelve recommendations were mentioned in the final report, with exact quotes taken from the proposals on enquetebeteiligung.de.

What impact did it have on the quality of policies?

For the first time in German history, citizens could take part, through an online process, in the work of an official parliamentary committee. Their proposals were partly included in the final report of the Enquete Commission, which is considered the official policy guideline for the German Government for the coming years. Through this process, the policy recommendations for the German Government were provided with a unique democratic legitimation. The proposals made on enquetebeteiligung.de were of such high quality that a committee composed of professional politicians and experts decided to adopt them, quoting them in full in its final report. Moreover, it disproved the widespread opinion that the average citizen is neither interested in legislation nor able to make high-quality contributions. Enquetebeteiligung.de has shown that it works. If we strive to enhance the possibilities for involving citizens in online policy-making processes, setting this as a democratic goal, we now have a blueprint for how it can be done.

How extensive was policy maker and public take-up?

Enquetebeteiligung.de and the Enquete Commission itself received wide and positive coverage in the German media. The final report, in which citizens' proposals were adopted, has recently been published. According to a scientific evaluation by Zeppelin University in Friedrichshafen (Germany), the quality of the proposals was extraordinarily high, and the use of the Adhocracy platform is highly recommended for future commissions. This reflects well on the platform and vindicates the citizens interviewed.

2050 Pathways Calculator
Describe briefly the context and functionalities of the solution, and how it was used in policy-making

The UK's Climate Change Act 2008 set in law a long-term greenhouse gas emissions reduction target for the year 2050, as well as a framework of five-yearly "carbon budgets" to reach it. In drafting its Low Carbon Transition Plan in 2009, the first White Paper which sought to bring together the diverse challenges of the newly created Department of Energy and Climate Change (DECC), the Department wished to investigate further what options the country had for meeting its target of reducing greenhouse gas emissions by 80% on 1990 levels by 2050.

The Department already had models for understanding long-term options, such as MARKAL, but none of these was easy or quick to run inside the Department, and so senior decision makers did not feel they had an opportunity to interrogate the results.

This was the context that pointed to the need for the work, but it also highlighted the importance of the ethos of the work: that it should be understandable, radically transparent, interactive, give quick results, and set out all the underpinning assumptions in an accessible way.

The solution which we developed in DECC's Strategy Directorate was the "2050 Pathways Calculator". This is an interactive computer model, available in three formats: the detailed Excel model, a user-friendly web tool, and a simplified 'serious game' or simulation (https://www.gov.uk/2050-pathways-analysis).

Publishing the 2050 Calculator in full has enabled a numerate and broader public debate about the UK's energy demand and supply, and provided a platform which allows everyone to join the discussion on the same terms. The model seeks to encompass all physically possible outcomes, rather than point only to those thought to be most likely at any one time.

What impact did it have on the quality of policies?

The 2050 Calculator helps everyone engage in the debate and lets Government make sure our planning is consistent with the long-term aim. The 2050 Calculator condenses months of work from technical experts into minutes. It can be used to engage a range of audiences on the challenges and opportunities of the energy system. It brings energy and emissions data alive, showing the benefits, costs and trade-offs of different versions of the future. It allows users to explore the fundamental questions of how the UK can best meet energy needs and reduce emissions. The tool has been shared transparently, both in the sense of sharing all its assumptions and formulations, and in the sense of sharing its results in a way that people can understand and use.
The analysis has been used in the Government's Budget statements and Annual Energy Statements, and it featured centrally in the UK Government's Carbon Plan 2011. The team drew out key conclusions from the work which have been picked up by teams across government: for example, the potential doubling of electricity demand over the period to 2050 even as energy demand as a whole falls; the limited supply of bioenergy, with competing demands from different sectors; and its use in understanding the renewables strategy and targets. It has helped senior people in the Department understand issues such as insulation ambition levels, fossil fuel usage, power grid decarbonisation, etc. The 2050 Calculator has been shown to new Ministers when they join the Department. It was used at points such as the Fukushima incident to respond to new questions.

Take-up from the public and policy makers

The 2050 Futures team often train colleagues across DECC and other government departments in how to use the 2050 Calculator, and it has been widely used alongside other more detailed models. Outside of government, the 2050 Calculator has also been widely used. The transparency and accessibility of the approach has led to collaborations from diverse quarters:
• With Parliamentary Select Committees and staff in parliament, to help give MPs a factual basis for debate.
• With individual enthusiasts and experts, who have contributed bug fixes and improvements to our modelling, interfaces and documents.
• With Cardiff University, to understand public attitudes to the choices we face when considering the energy system as a whole.
• With the Foreign Office and China, South Korea, Taiwan, Bangladesh, South Africa and the Asian Development Bank, helping each of them to develop their own versions of the 2050 Calculator. We are discussing potential partnerships with many other countries.
• With many schools and universities in their teaching. We provided a 'Schools Toolkit' to help teachers of Geography, Science, Maths and Citizenship use the 2050 Calculator. The Toolkit is most suited to students aged 11-16 years old. We also funded a Youth Panel to engage with the work and report to DECC.
• With companies and NGOs, e.g. the infrastructure company National Grid and Friends of the Earth, in their own outreach and internal thinking.

In terms of user statistics:
• My2050 has over 16,000 pathways submitted by the public.
• The team presented to over 500 stakeholders in the autumn 2010 Call for Evidence period.
• There are typically 10,000 unique users of the web tool over a three-month period.
• 100 people have registered to use the Wiki (these are the most active Calculator users).

4.4. Lessons learnt from the cases and the prize

What emerges from the analysis of the prize and the cases is that evidence of uptake is clearly available, and uptake can now be considered mature.

However, the evidence presented by the cases and the prize candidates with regard to their impact remains thin and anecdotal in nature. There is no thorough assessment of the impact on the quality of policies. Typically, the impact is demonstrated in terms of:
- visits to the website and participation rates;
- feedback and visibility towards media and politicians;
- actual influence over the decisions taken;
while the actual impact on the quality of policies is yet to be demonstrated. Some initial work (in the cases of GLEAM and 2050 Pathways) focuses on comparing the predictions with reality as it unfolds. Only the IdeaScale case presents some tangible ex-ante estimates of the advantages of the decisions taken through policy-making 2.0, but no thorough ex-post evaluation.

It is fair to conclude that evidence about impact remains mostly at the level of actual usage of the final results in policy decisions. Unfortunately, there is no systematic ex-post evaluation of such decisions. This weak evidence base is a major obstacle to encouraging further uptake of these solutions, and further investment in them.
We  are  far  from  having  robust  impact  evaluation  of  policy-­‐ making  2.0,  even  at  the  micro-­‐level  of  individual  cases.  
4.5. An additional research challenge: counterfactual impact evaluation of Policy-Making 2.0

The findings of the case studies, the survey and the prize consistently show that no systematic evaluation of the impact of policy-making 2.0 is available. This represents a major challenge to further adoption and experimentation in this domain. There are still a number of unresolved questions regarding policy-making 2.0 tools and methodologies:

• Do they help engage new stakeholders and communities?
• Do they help predict impact better than other models?
• Do they bring new relevant ideas useful for policy-making?
• Do they actually lead to better policies?

These questions could be structured in a new evaluation framework for policy-making 2.0, which encompasses the full intervention logic, from contextual information to the intervention, uptake and impact.

Figure 20: A proposed evaluation framework for policy-making 2.0. The framework runs from Context (socio-political factors) through Intervention (design of technology, design of methods, cost) and Uptake (more participants, more diverse participation) to Impact efficiency (high quality of ideas, impact on actual decisions, better predictions) and Impact effectiveness (improved performance of the public sector, improved empowerment of citizens).

The originality of this model lies in its comprehensiveness, particularly downstream. Typical evaluations of policy-making 2.0 initiatives stop at the level of uptake, such as visitors and users; in the best practices identified, evaluation extends to the actual influence on the decisions taken. The proposed framework also includes the actual benefits for the quality of policy making, such as the measurement of prediction capacity, the improved performance of the public sector, and the improved empowerment of citizens.

In this respect, there is a lack of systematic, robust evaluation of different policy-making 2.0 methods. Initial and anecdotal evidence points to the presence of potential impacts, but no proper counterfactual impact evaluation approach has been applied to date. In what follows we present the main methodologies in the field, and discuss how they can be applied to policy-making 2.0. We would like to stress that counterfactual impact evaluation is more likely to be used to evaluate policies and initiatives than technologies and methodologies. Moreover, it is more suitable for evaluating policies that impact a number of distinct actors.

Evaluating the impact of policies is a complex task, because one would like to know what the value of a given output/outcome variable would have been in the absence of the project. This is a value that, by definition, cannot be observed for units involved in the project. In other words, evaluators cannot know what the behaviour of a treated unit would have been in the absence of treatment. Similarly, we have no counterfactuals for the non-treated units (those not involved in the program).
This is a well-known problem in policy evaluation analysis (see for instance Neyman, 1923,
and Rubin, 1974, 1978, 1980, 1986), which has been overcome using several methods. What is common to all these 'alternative' approaches is that they attempt to identify or create the most appropriate control group281 in order to overcome the two main obstacles in the estimation of the counterfactual:

• 'Selection bias', which consists in the fact that the target population differs from the counterfactual population due to pre-intervention features. A solution is the introduction of an identification hypothesis stating that pre-intervention variables are sufficient to 'reconstruct' the control group of non-beneficiaries (the counterfactual).
• The presence of spontaneous dynamics, due to the fact that the target population differs from the control population in the trend of the outcome variable. A solution is the introduction of an identification hypothesis that takes into consideration the spontaneous dynamics of the outcome variable's trend.

There are basically six main counterfactual impact assessment methodologies.

Randomised controlled trials

A solution can be found in the case of randomized processes (this happens when the possibility to take part in a project is made available to people on the basis of a random process). In this situation we do not expect structural differences between those who are treated (and receive support) and those who are not, so that we can use the non-supported subjects as a control group for comparison with the former group.

Difference-in-Difference (DID)

The impact of a policy on an outcome can be estimated by computing a double difference, one over time (before and after the treatment) and one across subjects (between treated and non-treated), as in the sketch below. This simple method requires only aggregate data on the outcome variable, and at least three observations in time: two before and one after. Unfortunately, the difference-in-difference method assumes that the trends in the treatment and comparison groups are the same. With only four points of observation on means, we do not know whether this assumption is correct; however, with two additional pre-intervention data points the parallelism assumption becomes testable.
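To make the double difference concrete, the following minimal Python sketch computes a DID estimate from four hypothetical group means; the numbers are purely illustrative and are not drawn from any of the cases discussed above.

```python
# Hypothetical average outcomes (e.g. a policy performance score)
# for a treated and a control group, before and after the intervention.
treated_before, treated_after = 10.0, 14.0   # units exposed to the policy
control_before, control_after = 10.0, 11.0   # comparable non-exposed units

# First difference: change over time within each group
delta_treated = treated_after - treated_before   # 4.0
delta_control = control_after - control_before   # 1.0

# Second difference: the DID estimate of the policy effect,
# valid only under the parallel-trends assumption discussed above.
did_estimate = delta_treated - delta_control     # 3.0
print(f"Estimated policy effect (DID): {did_estimate}")
```

The control group's change over time (here 1.0) stands in for the spontaneous dynamics that the treated group would have experienced anyway; subtracting it isolates the part of the change attributable to the policy.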
Regression Discontinuity Design (RDD)

One solution that has been proposed in the literature is the use of the so-called "regression discontinuity design". This method can be applied to situations in which it is possible to identify a clear cut-off level for treatment access and in which treatment status is based on observable characteristics. In this case the cut-off is defined by the eligibility rules of the project, so that the treatment group is made up of people who just satisfy these criteria (and hence have access to the project), whereas the control group is composed of people who are just below the cut-off level and do not have access to the project. In such a circumstance it is reasonable to assume that the control group and the treated group are very similar against most criteria, and that the small difference in the variables guaranteeing access to treatment is not sufficient to justify a different value of the outcome variable, so that a difference in the latter can be entirely attributed to the treatment. A small illustrative sketch of this local comparison follows below.
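The following minimal sketch illustrates the local comparison around a cutoff on simulated data. The eligibility score, the cutoff, the size of the jump and the bandwidth are all assumptions made for illustration, not a definitive implementation.

```python
# A minimal sketch of a sharp regression discontinuity comparison.
# 'score' is the (hypothetical) variable that determines eligibility:
# units with score >= cutoff receive the treatment.
import numpy as np

rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 2000)        # hypothetical eligibility scores
cutoff = 50.0
treated = score >= cutoff
# Simulated outcome: smooth in the score, plus a jump of 4.0 at the cutoff
outcome = 0.1 * score + 4.0 * treated + rng.normal(0, 1, score.size)

# Keep only units in a narrow band around the cutoff, where the eligibility
# rule is effectively the only systematic difference between the two groups
bandwidth = 5.0
left = (score >= cutoff - bandwidth) & (score < cutoff)
right = (score >= cutoff) & (score < cutoff + bandwidth)

# A local linear fit on each side, evaluated at the cutoff, estimates the
# jump without the bias a raw comparison of window means would carry
fit_left = np.polyfit(score[left], outcome[left], 1)
fit_right = np.polyfit(score[right], outcome[right], 1)
effect = np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)
print(f"Estimated jump at the cutoff: {effect:.2f}")  # close to the true 4.0
```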
Instrumental variables and natural experiments

This category is relevant when exposure to the policy is to a certain degree determined by an external force which does not affect the outcome of the policy directly, but only indirectly, through its influence on the exposure. Angrist and Krueger (2001) define this situation as a natural experiment, i.e. one "where the forces of nature or government policy have conspired to produce an environment somewhat akin to a randomized experiment." There are two main approaches:

• The Wald estimator, in which the treatment effect is identified by the ratio of the difference in average outcome between units eligible and not eligible for treatment, weighted by the probability of treatment induced by the instrument. This method is used in the case of randomisation with partial compliance and randomised encouragement (a minimal sketch of this estimator follows at the end of this subsection).

• Two-stage least squares, consisting of a first stage in which a model predicting the probability of treatment as a function of the instrument and other variables is estimated, and a second stage in which the outcome equation is estimated using the predicted probability of treatment. This is the case of non-randomised natural experiments.

Unfortunately this method is often not feasible, as it does not work when treatment exposure is not mandatory and depends upon some selection process that needs to be controlled for, as is the case, for instance, when participation in a project is voluntary. Another major weakness of the approach is that it can be difficult to find an instrument that is both relevant and exogenous.
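A minimal sketch of the Wald estimator on simulated data follows. The instrument z, the take-up rates and the effect size are all invented for illustration, assuming randomised encouragement with partial compliance.

```python
# A minimal sketch of the Wald estimator under randomized encouragement.
# z is the instrument (1 = eligible/encouraged), d is actual treatment
# take-up, y is the outcome. All values are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
z = rng.integers(0, 2, n)                             # random eligibility
d = (rng.random(n) < np.where(z == 1, 0.6, 0.1)).astype(int)  # partial compliance
y = 2.0 * d + rng.normal(0, 1, n)                     # true treatment effect = 2.0

# Wald estimator: the jump in outcomes induced by eligibility, rescaled by
# the jump in treatment probability that eligibility induces
itt_y = y[z == 1].mean() - y[z == 0].mean()   # intention-to-treat effect
itt_d = d[z == 1].mean() - d[z == 0].mean()   # first stage (about 0.5 here)
wald = itt_y / itt_d
print(f"Wald estimate of the treatment effect: {wald:.2f}")  # close to 2.0
```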
Matching

The most common matching method is propensity score matching. This approach is based on the premise that, for each unit that has been treated, it is possible to find at least one non-treated unit that is "close" enough to the treated counterpart. In this context "close" means that it exhibits a value of the propensity score very similar (if not identical) to the one observed for the treated unit. The propensity score is defined as the conditional probability of receiving the treatment, and is usually estimated using logit or probit regressions. After having computed the propensity scores for all the units in the dataset, it is possible to use this value to match units in the treated group with at least one unit in the control group. There are various techniques for undertaking this matching process: some use replacement while others do not, and some use more complex definitions of distance, but the logic in all these approaches is very similar: find a close match for the treated unit within the group of untreated units, using the values of the propensity scores. This approach works well if the evaluator has access to a representative sample of the underlying population and can control for all the variables determining the treatment status (the so-called "selection on observables" assumption); otherwise the process can be bedevilled by the selection bias issue.

There are three main types of propensity score matching (a sketch of the first type follows after this list):

• Nearest available matching, according to which each treated unit is matched with the one untreated unit having the most similar initial characteristics.

• Radius matching, according to which each treated unit is matched with all of the untreated units having a propensity score within a certain degree of tolerance with respect to the one of the treated unit.

• Kernel matching, in which the outcome of each treated unit is compared with a weighted average of the outcomes of all non-treated units.
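The following sketch illustrates nearest available matching on simulated data, assuming selection on observables holds. The propensity score is estimated with a logit model; scikit-learn's LogisticRegression and NearestNeighbors are one possible implementation, and every variable and coefficient here is an assumption made for illustration.

```python
# A minimal propensity-score matching sketch (nearest available matching).
# X holds pre-intervention characteristics, t the treatment indicator and
# y the outcome; all data here are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(0, 1, (n, 3))                          # pre-intervention variables
p_true = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
t = (rng.random(n) < p_true).astype(int)              # selection on observables
y = 1.5 * t + X[:, 0] + rng.normal(0, 1, n)           # true effect = 1.5

# Propensity score: conditional probability of treatment given X (logit model)
pscore = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# Nearest available matching: pair each treated unit with the untreated
# unit whose propensity score is most similar
untreated = np.where(t == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(pscore[untreated].reshape(-1, 1))
_, idx = nn.kneighbors(pscore[t == 1].reshape(-1, 1))
matched_controls = untreated[idx.ravel()]

att = (y[t == 1] - y[matched_controls]).mean()        # effect on the treated
print(f"Matched estimate of the treatment effect: {att:.2f}")  # roughly 1.5
```

Note that a naive comparison of raw group means would be biased upward here, because the first characteristic drives both selection and the outcome; matching on the propensity score removes most of that bias.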
There is a very important difference between propensity score matching and multiple regression analysis. In propensity score matching, pre-intervention characteristics differ between treated and non-treated units, affecting the final outcome of the two groups differently and independently from the effect of the programme, thereby creating a selection bias. Multiple regression analysis, on the other hand, makes use of the data from all the treated and non-treated units, separating the impact on the final outcome due to the different initial characteristics (included in the model as control variables) from the impact of the programme. The trick is therefore to include as control variables all the initial characteristics that differ between the treated and non-treated units, in order to compare the final outcomes and interpret the remaining difference as the impact of the programme. When a sufficiently large number of comparable projects becomes available, the multiple regression approach can also be adopted in the analysis.

Matching is mostly inspired by outcome additionality and to some extent overlooks behavioural additionality. Findings from matching should therefore always be combined with real-time case study evidence, to allow some insight into the causality mechanisms. The matched-sample approach, in fact, always raises the question of just how similar the subjects really are.

Self-reported counterfactuals

This approach, employed especially for assessing behavioural additionality (Aslesen, Broch, Koch, & Solum, 2001; Davenport, Grimes, & Davies, 1998), consists in questioning assisted subjects directly and posing them counterfactual questions. This involves asking the recipients of public support how their behaviour changed, asking formerly supported people how the withdrawal of assistance affected their behaviour, and asking non-supported people how they think their behaviour would have changed had they received support. Moreover, as one of the objectives of the investigation may be to improve the intervention process itself, the questioning can also involve the intermediary actors. Surveys are a good solution, provided, of course, that the respondents do not answer strategically and are able to reflect on behavioural changes in a counterfactual situation. The analysis of direct questions on additionality assumes that the respondents are indeed able to reflect on their behaviour in hypothetical, counterfactual situations and that they are telling the truth to the best of their knowledge. However, as respondents have an interest in the continuation of public support, they might be tempted to over-emphasise its merits (Sakakibara, 1997). From the opposite perspective, one could argue that some people might be reluctant to admit their dependence on public support. Either way, the differences between hypothetical and real situations should be controlled for through a mixture of matching and self-reported counterfactuals. A small sketch of how such answers can be tabulated follows below.
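As a minimal illustration of how such self-reported answers can be analysed, the sketch below tabulates hypothetical responses to a single counterfactual question. The answer categories and counts are invented and are only meant to show the first quantitative step of this method.

```python
# A minimal sketch of tabulating self-reported counterfactual answers.
# Each entry is a hypothetical respondent's answer to the question
# "What would you have done without the public support?".
from collections import Counter

answers = (["same behaviour anyway"] * 42    # pure deadweight
           + ["same, but later"] * 23        # timing additionality
           + ["reduced scale"] * 18          # scale additionality
           + ["nothing at all"] * 17)        # full additionality

counts = Counter(answers)
n = len(answers)
for answer, k in counts.most_common():
    print(f"{answer:>22}: {k:3d}  ({k / n:.0%})")

# The share answering "same behaviour anyway" is the deadweight of the
# intervention; the remaining shares are, to varying degrees, additional.
print(f"Deadweight share: {counts['same behaviour anyway'] / n:.0%}")
```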
Challenges and research gaps

• Often we are not facing a natural experiment situation, as the treatment exposure is not mandatory and depends upon some selection process that needs to be controlled for.

• Often it is not clear what the treated unit is. For example, a policy-making tool deployed on the internet can affect many groups of people from different countries, and in any case it is very difficult to obtain data on the untreated.

• Conversely, the same units are often treated with several different policies and initiatives.

• Sometimes the treated unit is an entire country: this makes it impossible to apply methodologies such as randomised controlled trials or matching.

• Finally, there is a need to develop new sets of indicators for assessing impact.

The most promising methods seem to be randomised controlled trials and self-reported counterfactuals. Randomised controlled trials can be used for assessing the impact of policies and initiatives, especially at the local level. By using the self-reported counterfactual method through workshops or surveys, on the other hand, it is possible to extract counterfactual information from the agents joining the
programmes and initiatives, complementing it with an investigation of the underlying background information.

Let us now look at some examples of counterfactual impact evaluation applied to open government, used to assess the validity of claims for transparency and participation (a minimal sketch of how such randomised comparisons are analysed follows after these examples):

• Zhang (2012) ran a pilot field experiment in Kenya to explore how variation in the content of an information campaign can affect political behaviour in villages (Kelly Zhang, Increasing Citizen Demand for Good Government in Kenya, May 2012, unpublished manuscript, available at http://cega.berkeley.edu/assets/cega_events/4/Zhang-Kelly_Increasing-Citizen-Demand_Kenya_2012_v2.pdf). The experiment involved two interventions. The first provided a Constituency Development Fund (CDF) report card, detailing the budgets of all the CDF projects allocated funding in the constituency for that fiscal year, to see whether villagers respond to unaccounted-for money in locally visible projects. The second intervention, based upon the mixed findings in the literature as to how information can enable citizens to take action, coupled the report card with a public participation flyer, to see whether information about legal rights and decision-making processes is necessary for citizens to use the report card to take action.

• Olken (2010) ran an experiment in which 49 Indonesian villages were randomly assigned to choose development projects through either direct election-based plebiscites or representative-based meetings. In villages where plebiscites were held, there was a dramatic increase in satisfaction among villagers, in knowledge about the project, in perceived benefits, and in the reported willingness to contribute. Changing the political mechanism, however, had much smaller effects on the actual projects selected, with some evidence that plebiscites resulted in the projects chosen by women being located in poorer areas. According to the outcomes of the study, satisfaction and legitimacy are substantially increased by direct participation.
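As a minimal illustration of the analysis underlying randomised comparisons of this kind, the sketch below computes a difference in mean outcomes between randomly assigned groups, together with its standard error. The outcome values are simulated placeholders, not data from the studies cited above.

```python
# A minimal sketch of analysing a randomized comparison: difference in
# mean outcomes between randomly assigned groups, with a standard error.
import numpy as np

rng = np.random.default_rng(3)
treated = rng.normal(7.2, 2.0, 250)   # e.g. satisfaction scores, treated villages
control = rng.normal(6.5, 2.0, 250)   # satisfaction scores, control villages

effect = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / treated.size
             + control.var(ddof=1) / control.size)
print(f"Estimated effect: {effect:.2f} (SE {se:.2f}, "
      f"95% CI {effect - 1.96 * se:.2f} to {effect + 1.96 * se:.2f})")
```

Because assignment is random, the difference in means is an unbiased estimate of the average treatment effect, which is precisely why randomised controlled trials are singled out above as one of the most promising methods.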
5. Conclusions: Policy-Making 2.0 between hype and reality

In this final section, we bring together the findings of the different sections and put policy-making 2.0 in the perspective of the long-term improvement of public decision-making.

A first question to be addressed is to what extent the research challenges relate to specific challenges in policy-making, as described in section 3 and illustrated in the figure below. As we can see, the described research challenges capture all the main needs of policy-makers, and in particular the capacity to detect problems early and to leverage collective intelligence in policy-making.

Figure 21: Relation Between Policy-Making Needs and Research Challenges

In view of this analysis, the next step is to relate each of the research challenges to the policy-making cycle. Each research challenge is in fact relevant for one or more of the specific tasks, not for all of them. The figure below illustrates this relationship: in each of the phases of the cycle, for each of the tasks, we can identify the potential impact of the research challenges described.
The policy cycle starts with the agenda-setting phase, where the problem is identified and analysed. In this phase, visualisation and opinion mining can help to identify problems at an early stage. Advanced modelling techniques are then used to untangle the causal relationships behind the problem, revealing the causal roots that need to be addressed by policy.

Once the problem is clearly spelled out, we move to the policy design phase, where collaborative solutions are useful to identify the widest range of options by leveraging collective intelligence. In order to facilitate the choice of the most effective option, immersive simulations support decision-makers by taking into account unexpected impacts and relationships. Collaborative governance then makes it possible to further develop and fine-tune the most effective option, for example through commentable documents.

Once the option is developed and adopted, we enter policy implementation. In this phase, it is crucial to ensure awareness, buy-in and collaboration from the widest range of stakeholders: social network analysis, crowdsourcing and serious gaming are useful to deliver this.

Already during implementation, we move into the monitoring and evaluation phase. Open data allow stakeholders and decision-makers to better monitor execution; together with sentiment analysis, they can be used to evaluate the impact of the policy, also through advanced visualisation techniques.

In summary, our vision for 2030 embodies a radically different context for policy-making 2.0.

On policy modelling and simulation: thanks to the standardisation and reusability of models and tools, systems thinking and modelling applied to policy impact assessment have become pervasive throughout government activities, and are no longer limited to high-profile regulation. Model building and simulation are carried out directly by the responsible civil servants, collaborating with different domain experts and colleagues from other departments. Visual dynamic interfaces allow users to directly manipulate the simulation parameters and the underlying model.

Policy modelling software becomes productised and engineered, and is delivered as a service, through the cloud, bundled with added-value services and multidisciplinary support, including mathematical, physical, economic, social, policy and domain-specific scientific expertise.
Cloud-based interoperability standards ensure full reusability and modularity of models across platforms and software.

System policy models are dynamically built, validated and adjusted taking into account massive datasets of heterogeneous data with different degrees of validity, including sensor-based structured data and citizen-generated unstructured opinions and comments. By integrating top-down and bottom-up agent-based approaches, the models are better able to explain human behaviour and to anticipate possible tipping points and domino effects.

On collaborative governance: policy-making leverages collective intelligence and collective action, and accounts for the greater polycentricity of our governance system. While traditional tools are designed for public decision-makers, these research challenges are more symmetric by nature, in order to engage stakeholders throughout the phases of the policy-making cycle. Thanks to visualisation and design, policy-making 2.0 is able to reach out to new stakeholders and lower the barriers to entry into policy discussions. It is designed to be not only more effective, but also more participatory.

This document has described at length the specific opportunities of policy-making technology, and identified the technological bottlenecks that we need to overcome in the next years if we want to grasp the opportunities of Policy-Making 2.0. The research challenges identified so far are not just a simple collection of research issues, but an integrated bundle of innovative solutions that together can lead to a paradigm shift in policy-making.

Yet we are well aware that the main bottlenecks to achieving this vision are not technological. The reason why policy-making is not already as open and evidence-based as it could be lies less in the limitations of the technology than in the concrete needs and limitations of human behaviour.

This is a lesson we have learnt from many years of studies on the impact of ICT, for example on e-government: regardless of the technological tools at one's disposal, the key barriers to change lie in cultural and organisational issues.

We cannot claim to propose a more human-centric policy-making, one that takes into account the complexity of human behaviour, and then fail to recognise the humanity of policy-makers. Policy-makers are agents, and as such are self-interested and driven by their own agendas. They are human, and therefore neither perfectly rational nor atomised. Citizens are human, and often not that interested in public policy.

It would therefore be foolish to expect that the simple availability of the technology will suddenly free policy-making from politicking, corruption, personal interests or simple incompetence.
It is not within the scope of this roadmap to develop generic policy recommendations for improving policy-making as such, yet we cannot treat non-technological factors as a simple black box: as described in section 2.3, technological tools have to take into account the concrete problems of policy-making.

We propose that policy-making 2.0 is not a panacea for better government, yet neither is it neutral with respect to the power relationships that enable such problems as corruption and incompetence to emerge. In other words, these are not "just tools" that can be used for good or ill: they provide the opportunity to re-frame the system of checks and balances that determines the likelihood of good or bad policy-making.

More open data, more transparent models and more visually accountable policy measures can facilitate the uncovering of corruption, personal interests and incompetence. The emphasis on the usability and openness of modelling is opening up policy-making to a wider range of stakeholders. The availability of different simulated future scenarios enhances the accountability of the decisions policy-makers take today.
There will always be room for malpractice and greed in policy-making 2.0, as in any human activity. This is, however, not an argument to give up on improving the available methods. Raising the barriers to malpractice, and lowering the barriers to good practice, is an achievable goal worth pursuing.
6. References

1. Alexopoulos, C. & Kim, S. H. (2002). Output Data Analysis for Simulations. Proceedings of the 2002 Winter Simulation Conference, San Diego (CA), 8-11 December.
2. Alexopoulos, C. & Seila, A. F. (1998). Output Data Analysis. In: Banks, J. (ed.), Handbook of Simulation: Principles, Methodology, Advances, Applications and Practice. John Wiley, New York.
3. Balci, O. (1989). How to assess the acceptability and credibility of simulation results. Proceedings of the 1989 Winter Simulation Conference, Washington (DC), 4-6 December.
4. Barabasi, A. L. (2003). Linked: How Everything is Connected to Everything Else and What it Means for Business and Everyday Life. Plume Books.
5. Bederson, B. & Shneiderman, B. (2003). The Craft of Information Visualization: Readings and Reflections.
6. Olken, B. A. (2010). Direct Democracy and Local Public Goods: Evidence from a Field Experiment in Indonesia. American Political Science Review, 104(2), 243-267.
7. Bertin, J. (1967). Sémiologie Graphique. Les diagrammes, les réseaux, les cartes.
8. Boyd, D. & Crawford, K. (2011). Six Provocations for Big Data. A Decade in Internet Time: Symposium on the Dynamics of the Internet and Society, September 2011.
9. Burkholder, L. (ed.) (1992). Philosophy and the Computer. Boulder, San Francisco and Oxford: Westview Press.
10. Chen, C. (2005). Top 10 unsolved information visualisation problems. IEEE Computer Graphics and Applications, 25, 12-16.
11. Christakis, N. & Fowler, J. (2007). The spread of obesity in a large social network over 32 years. New England Journal of Medicine, 357, 370-379.
12. Colecchia, A. & Schreyer, P. (2002). ICT Investment and Economic Growth in the 1990s: Is the United States a Unique Case? A Comparative Study of Nine OECD Countries. Review of Economic Dynamics, 5(2), 408-442.
13. European Commission (2009). Impact Assessment Guidelines, SEC (2009) 92.
14. Friendly, M. (2008). Milestones in the history of thematic cartography, statistical graphics, and data visualization.
15. Gass, S. I. (1983). Decision-aiding models: validation, assessment and related issues for policy analysis. Operations Research, 31(4), 601-663.
16. Gass, S. I. & Joel, L. (1987). Concepts of model confidence. Computers and Operations Research, 8(4), 341-346.
17. Goldsman, D. & Tokol, G. (2000). Output Analysis procedures for computer simulations. Proceedings of the 2000 Winter Simulation Conference, Orlando (FL), 10-13 December.
18. Goldsman, D. & Nelson, B. L. (1998). Comparing systems via simulation. In: Banks, J. (ed.), Handbook of Simulation: Principles, Methodology, Advances, Applications and Practice. John Wiley, New York.
19. Heer, J., Viégas, F. B. & Wattenberg, M. (2007).
Voyagers and Voyeurs: Supporting Asynchronous Collaborative Information Visualisation. In: CHI 2007, April 28-May 3, 2007, San Jose, California, USA.
20. Howard, C. (2005). The Policy Cycle: A Model of Post-Machiavellian Policy Making? Australian Journal of Public Administration, 64, 3-13. doi: 10.1111/j.1467-8500.2005.00447.x
21. Jorgenson, D. W., Ho, M. S. & Stiroh, K. J. (2008). A Retrospective Look at the US Productivity Growth Resurgence. Journal of Economic Perspectives, 22(1), 3-24.
22. Keim, D. (2011). Solving Problems with Visual Analytics: Challenges and Applications. Society, 65, p. 1.
23. Keim, D. A., Mansmann, F., Schneidewind, J., Thomas, J. & Ziegler, H. (2008). Visual Analytics: Scope and Challenges.
24. Kelton, W. D. (1997). Statistical Analysis of Simulation Input. Proceedings of the 1997 Winter Simulation Conference, Atlanta (GA), 7-10 December.
25. Kuhlmann, S. & Meyer-Krahmer, F. (1994). Practice of Technology Policy Evaluation in Germany: Introduction and Overview. In: Becher, G. et al. (eds.), Evaluation of Technology Policy Programmes in Germany. Springer Netherlands, pp. 3-29.
26. Landesberger, T. von, Goerner, M. & Schreck, T. (2009). Visual Analysis of Graphs with Multiple Connected Components. IEEE Symposium on Visual Analytics Science and Technology.
27. Landesberger, T. von, Knuth, M., Schreck, T. & Kohlhammer, J. (2008). Data Quality Visualization for Multivariate Hierarchic Data. IEEE Information Visualization Conference (INFOVIS), Columbus, OH, USA.
28. Latour, B. (2009). Tarde's idea of quantification. In: Candea, M. (ed.), The Social After Gabriel Tarde: Debates and Assessments. London: Routledge, pp. 145-162.
29. Law, A. M. (1983). Statistical Analysis of Simulation Output Data. Operations Research, 31(6), 983-1029.
30. Law, A. M. (2006). Simulation Modelling and Analysis, 4th ed. McGraw-Hill, New York.
31. Lazer, D., Pentland, A., Adamic, L., Aral, S., Barabási, A., Brewer, D., Christakis, N., Contractor, N., Fowler, J., Gutmann, M., Jebara, T., King, G., Macy, M., Roy, D. & Van Alstyne, M. (2009). Computational Social Science. Science, 323, 721-723.
32. Lessig, L. (2009). Against Transparency. The New Republic, October 2009.
33. Livnat, Y. et al. (2005). Visual correlation for situational awareness. IEEE Symposium on Information Visualization (INFOVIS), pp. 95-102.
34. Nakayama, M. K. (2002). Simulation Output Analysis. Proceedings of the 2002 Winter Simulation Conference, San Diego (CA), 8-11 December.
35. OECD (2005). Modernising Government: The Way Forward. Paris: OECD.
36. Oliner, S. D. & Sichel, D. (2000). The Resurgence of Growth in the Late 1990s: Is Information Technology the Story? Journal of Economic Perspectives, 14, 3-22.
37. Ormerod, P. (2010). N Squared: Public Policy and the Power of Networks. RSA Pamphlets.
38. Pang, B. & Lee, L. (2008). Opinion Mining and Sentiment Analysis. Foundations and Trends in Information Retrieval, 2(1-2), 1-135. doi: 10.1561/1500000011
39. Phelps, E. S. (ed.) (1970). Microeconomic Foundations of Employment and Inflation Theory. New York: Norton and Co. ISBN 0-393-09326-3.
40. Rhyne, T.-M. et al. (2004). Can We Determine the Top Unresolved Problems of Visualisation? Proc. IEEE Visualisation, IEEE Press, pp. 563-566.
41. Rosenblum (1994). Research issues in scientific visualization. IEEE Computer Graphics and Applications, 14(2), 61-63.
42. Saffo, P. (1997). Looking Ahead: Implications of the Present; Are You Machine Wise? Harvard Business Review.
43. Schelling, T. C. (1969). Models of segregation. The American Economic Review, 59(2), 488-493.
44. Sargent, R. (2009).
Verification and Validation of Simulation Models. Proceedings of the 2009 Winter Simulation Conference.
45. Schlesinger, M. (1979). Terminology for model credibility. Simulation, 32(3), 103-104.
46. Shneiderman, B. (1996). The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. IEEE Symposium on Visual Languages.
47. Siewiorek, D., Smailagic, A., Furukawa, J., Krause, A., Moraveji, N., Reiger, K., Shaffer, J. & Wong, F. L. (2003). SenSay: A Context-aware Mobile Phone. In: Proceedings of the 7th IEEE International Symposium on Wearable Computers (ISWC), pp. 248-249.
48. Taleb, N. (2008). The Black Swan: The Impact of the Highly Improbable. Penguin.
49. Thomas & Ahrweiler (eds.) (2005). Internationale partizipatorische Kommunikationspolitik - Strukturen und Visionen. Münster: LIT.
50. Thomas, J. J. & Cook, K. (2005). Illuminating the Path: The R&D Agenda for Visual Analytics. IEEE.
51. Tizzoni, M., Bajardi, P., Poletto, C., Ramasco, J., Balcan, D., Gonçalves, B., Perra, N., Colizza, V. & Vespignani, A. (2012). Real-time numerical forecast of global epidemic spreading: case study of 2009 A/H1N1pdm. BMC Medicine, 10:165.
52. Tufte, E. R. (1983). The Visual Display of Quantitative Information. Graphics Press.
53. van Wijk, J. (2005). The Value of Visualization. In: IEEE Visualization Conference, pp. 79-86.
54. Ware, C. (2004). Information Visualization: Perception for Design, 2nd ed. Academic Press.
55. Ware, C. (2008). Visual Thinking for Design. Morgan Kaufmann.
56. Wattenberg, M. (2008). The Word Tree, an Interactive Visual Concordance. IEEE Transactions on Visualization and Computer Graphics, 14(6).
7. List of Acronyms

ABM: Agent-Based Models
ADAMS: Anomaly Detection at Multiple Scales
APES: Agricultural Production and Externalities Simulator
BI: Business Intelligence
BRICs: Brazil, Russia, India and China
CAPTCHA: Completely Automated Public Turing test to tell Computers and Humans Apart
CDC: Centers for Disease Control and Prevention
CGE: Computable General Equilibrium Models
CINDER: Cyber-Insider Threat
COMA: COllaborative Modelling Architecture
CSCW: Computer-Supported Cooperative Work
CTR: Click-Through Rate
CVADA: Center of Excellence on Visualization and Data Analytics
DARPA: Defense Advanced Research Projects Agency
DBMS: Database Management Systems
DECC: Department of Energy and Climate Change
DID: Difference-in-Difference
DHS: Department of Homeland Security
DOE: Design of Experiments
DSGE: Dynamic Stochastic General Equilibrium Models
DW: Data Warehouse
ECB: European Central Bank
EDW: Enterprise Data Warehouse
EEA: European Economic Area
eID: Electronic Identity
ERP: Enterprise Resource Planning
ETL: Extract, Transform, Load
FAO: Food and Agriculture Organization
GHG: Greenhouse Gas
GIS: Geographic Information System
GLEAM: Global Epidemic and Mobility Model
GPS: Global Positioning System
GSS: Global Systems Science
HCI: Human-Computer Interaction
HLA: High Level Architecture
ICT: Information and Communication Technologies
ILEs: Interactive Learning Environments
I/O: Input/Output
KPIs: Key Performance Indicators
LOD: Linked Open Data
LMS: Learning Management Systems
LTA: Land Transport Authority
MarkAl: MARKet ALlocation
MAS: Multi-Agent Systems
MPOs: Members of Public Organizations
MPP: Massively Parallel Processing
NAF: New America Foundation
NASA: National Aeronautics and Space Administration
OECD: Organisation for Economic Co-operation and Development
OGD: Open Government Data
OGPL: Open Government Platform
OLAP: On-Line Analytical Processing
OOP: Object-Oriented Programming
PMOD: Policy Modelling
P2P: Peer-to-Peer
RDD: Regression Discontinuity Design
RDF: Resource Description Framework
RTAP: Real-Time Analytics Processing
SaaS: Software as a Service
SAML: Security Assertion Markup Language
SMD: Social Media Data
SPARQL: SPARQL Protocol and RDF Query Language
STREP: Specific Targeted Research Projects
TRM: Technology Road-Mapping
T21: Threshold 21
UNOSAT: United Nations Operational Satellite Applications Programme
XML: eXtensible Markup Language