Benchlearning in eInclusion Study – Introduction for participants

2. Content
   - Policy Context
   - Measuring Impact
   - The Concept of Benchlearning
   - Benchlearning Process
   - Workshop Agenda

3. eInclusion is one of the main priorities of the European Union and its Member States
   - Riga Ministerial Conference
     - Reduce gaps in Internet usage
     - Reduce regional disparities in Internet access
     - Reduce the digital literacy gap
   - Digital Agenda for Europe
     - Promote Internet access and take-up by all European citizens
     - Promote deployment and usage of modern accessible online services
     - Enhance digital literacy, skills and inclusion
   - i2010 eInclusion Subgroup Committee
     - Improve ICT access and skills to stimulate use
     - Address the ageing trend by promoting ICT-enabled solutions
     - Move from accessibility to personalisation
     - Improve coordination and implementation of eInclusion measures for greater impact
   The focal point has shifted from mere access to usage and impact.

4. Measurement of the impact of eInclusion is still immature
   Measuring impact has proven difficult:
   - Complexity and heterogeneity
   - Divergence across countries
   - Unclear links of causality
   - The need for longitudinal analysis
   Impact indicators have not yet been implemented:
   - Indicators are incoherent
   - Indicators are not yet adapted to eInclusion organisations' needs
   - Indicators have not yet been validated
   Goals of this study:
   - A coherent and practically usable framework to measure the impact of eInclusion policies
   - Increased performance by learning from each other's experiences

5. By using the Benchlearning approach, we can build on organisations' experiences to increase the impact of eInclusion activities
   Benchlearning means sharing knowledge, exchanging experiences, contextualising data and finding commonalities in order to:
   - Test the relevance, feasibility and comparability of the chosen indicators
   - Evaluate and adapt impact measurement
   - Accelerate improvement
   - Recognise an impact and understand its drivers (enablers, barriers, critical success factors)
   Intended outputs: comparable indicators, validated indicators and a self-assessment tool.

6. Therefore a continuous cycle of learning and sharing has to be established: the Benchlearning Cycle.

7. ...and key principles have to be in place
   - Behavioural: Trust; Learning by Experiencing; Structured Knowledge Work; Management of Moral Hazard
   - Structural: a clear link of the study to organisational strategy; a clear link of the study to the management approach; Scalability
   "If we always do what we've always done, we will get what we've always got." (Adam Urbanski)

8. Today's workshop is an important part of the Benchlearning Cycle
   Now that we have collected, we can start to measure, compare and analyse:
   - Create a learning environment: get to know each other and each other's activities
   - Understand what impact is: what are its main drivers and barriers
   - Compare the different ways of measuring results: how to show the value of your programme
   - Share knowledge and experiences: what can we learn from each other

9. So let's start the learning journey towards an inclusive Information Society!
   Agenda, November 8th
   - 09:00–10:00 Introduction to the Benchlearning workshop
   - 10:00–12:00 Introduction of the 8 participating eInclusion projects: carrousel, each participant introducing themselves and their project in max. 10 minutes (coffee break at 11:00)
   - 12:00–13:00 Preliminary results of the data gathering
   - 13:00–14:00 Lunch
   - 14:00–14:30 Discussion on common strengths and weaknesses
   - 14:30–17:00 Breakout sessions to explore comparability and determine learning elements (coffee break at 15:15)
   - 17:00–17:30 Wrap-up and introduction of the learning structures
   - 20:00 Joint dinner at Da Silvio's, Via San Petronio Vecchio 34D
   Agenda, November 9th
   - 09:00–09:30 Recap of Day 1
   - 09:30–10:30 General methodology
   - 10:30–11:00 Participants' experiences with measurement (coffee break at 11:00)
   - 11:00–12:00 Setting up a learning structure (plenary)
   - 12:00–13:00 Setting up a learning structure (break-outs)
   - 13:00–14:00 Lunch
   - 14:00–14:30 Setting up a learning structure (break-outs)
   - 14:30–15:30 Wrap-up and next steps

11. Back-up

12. Available Impact Indicators – per domain

13. Available Impact Indicators – per intervention
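To illustrate what "comparable" means for the indicators listed in the back-up slides, here is a minimal sketch that is not taken from the study materials: the programme names, fields and figures are all invented. An indicator only becomes comparable across programmes when every participant computes it from the same definition over the same kind of data.

```python
# Hypothetical sketch: a comparable impact indicator is one computed with
# the same definition and the same fields for every programme. All names
# and numbers below are illustrative, not from the study.
from dataclasses import dataclass

@dataclass
class Measurement:
    programme: str          # e.g. a participating eInclusion project
    users_before: int       # target-group internet users at baseline
    users_after: int        # target-group internet users at follow-up
    target_population: int  # size of the target group reached

def usage_increase_pp(m: Measurement) -> float:
    """Percentage-point increase in internet usage within the target group."""
    return 100.0 * (m.users_after - m.users_before) / m.target_population

programmes = [
    Measurement("Programme A", 120, 210, 1000),
    Measurement("Programme B", 80, 230, 1000),
]
for m in programmes:
    # The same formula is applied to every programme, which is what
    # makes the resulting figures comparable across programmes.
    print(f"{m.programme}: +{usage_increase_pp(m):.1f} percentage points")
```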
Slide notes

  • Plan: select organisations on the basis of: project type, leading actor, funding type, project objectives, target audience (elderly or low-skilled societal groups), geographical scope, availability of data and evidence, data quality, level of innovation, willingness to participate, and relevance/applicability of results. Collect: data gathering templates. Now: Compare and Analyse.
  • Behavioural success factors
    - Trust: Benchlearning requires that participants are willing to share sensitive information with each other: stories of failure, financial data, examples of moral hazard, etc. The collaborative network on which the study builds needs to cater for these sensitivities.
    - Learning by experiencing: The project must not stay at the theoretical level but must actively engage actors to make them 'experience'. 'Experiencing' is vitally important to make Benchlearning actors 'learn things the hard way' instead of limiting their work to abstract levels of analysis. As a simple metaphor, children have to fall to learn the dangers of speed and movement. Likewise, Benchlearning participants will need to challenge and test each other's initiatives to learn and progress effectively. Experience can come through peer reviewing, challenging and questioning (each other's policies, for example), monitoring and partnering. It is important that the study is organised as a project for mutual learning and learning from experience. Each actor must take an active role and get the opportunity to reflect on their experience. The consortium's facilitators will support this reflection process.
    - Structured knowledge work: There are three basic categories of knowledge: explicit knowledge, which is articulated by its 'owner'; implicit knowledge, which is not articulated but can be; and tacit knowledge, which is not articulated and of which the owner is not consciously aware. Each type requires a different kind of knowledge work. Explicit knowledge is commonly shared by the owner on their own initiative; implicit knowledge can be brought to the surface in question-and-answer sessions; tacit knowledge can only be decoded through mutual experiencing and observation.
    - Management of moral hazard: Benchlearning participants will be assigned an official role in the study but will also embody other roles. Typically there will be 'talkers' and 'listeners', 'visionaries' and 'grounded implementers', 'IT specialists' and 'policy makers'. These groups are likely to adopt different behaviours: they pursue different goals, hold different visions of Benchlearning, and have different expectations of the project's outcome. It is therefore important that participants are given sufficient room to express their viewpoints and do not find themselves in isolation.
  • Structural success factors
    - Clear link of the study to organisational strategy: The study's raison d'être must be rooted in the agency's vision or strategy: something the organisation really must do, as opposed to a simple 'nice to have'. For example: increase the take-up of eGovernment services in remote areas by Y%, or ensure a participation rate of Z% in eHealth amongst the elderly. Commonly, the project goals will be derived from the organisation's objectives: its response to an EU policy or directive, a national law, a mission statement or vision, and the like. These high-level objectives help to frame the Benchlearning study and align it with goals that have been commonly agreed upon.
    - Clear link of the study to the management approach: The study's ultimate goal is not the measurement itself. Rather, the focus must be on managing what has been measured and implementing the related follow-up actions, including a potential redesign of policy. In this sense, the measurement must not be a one-off exercise but become an inherent pillar of the strategic steering of the organisation. Only then will the Benchlearning project's outcomes be used effectively, for example for decision making, human resource management and risk management, as part of a fully-fledged management cycle.
    - Scalability: To build the data gathering capabilities of organisations, data will need to be gathered at the level of single initiatives, i.e. bottom-up. This has advantages but also bears a distinct risk: lack of scalability. The main challenge will therefore be to identify, at an early stage of the study, how the data could be aggregated further, for example to the organisational, sectoral or even national level. Scalability in general is one of the major challenges of any Benchlearning project.
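To make the scalability point concrete, here is a minimal, hypothetical sketch of bottom-up aggregation; the field names and figures are invented for illustration. It assumes every initiative reports the same fields, which is precisely what makes the roll-up possible.

```python
# Hypothetical sketch: initiative-level indicator values rolled up to the
# organisational or national level. All names and numbers are invented.
from collections import defaultdict

initiatives = [
    {"initiative": "i1", "organisation": "org-A", "country": "IT", "people_trained": 300},
    {"initiative": "i2", "organisation": "org-A", "country": "IT", "people_trained": 150},
    {"initiative": "i3", "organisation": "org-B", "country": "LT", "people_trained": 500},
]

def roll_up(records, level):
    """Aggregate an additive initiative-level indicator to a higher level,
    e.g. 'organisation' or 'country'."""
    totals = defaultdict(int)
    for record in records:
        totals[record[level]] += record["people_trained"]
    return dict(totals)

print(roll_up(initiatives, "organisation"))  # {'org-A': 450, 'org-B': 500}
print(roll_up(initiatives, "country"))       # {'IT': 450, 'LT': 500}
```

Note that a simple sum only works for additive indicators such as counts; rates and percentages would need population-weighted averaging, which is exactly the kind of aggregation decision to settle early in the study.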
  • Overall goal of the project: find impact indicators that are comparable and valuable across eInclusion programmes.
    Day 1 goal: introduce the study and find out what participants are really measuring and what measurement of impact means to them.
    Morning:
    - Presentation "Introduction to the Study" (Dinand): context and content of the study; the concept of Benchlearning (what it is, why we do it, what the process looks like and what we expect from the participants); the workshop (why, what the programme is, follow-up and what we expect from the participants). Also capture the participants' expectations to feed into the workshop (or do so at item 3).
    - Presentations introducing the participants (participants): who they are and what their main activities are; what they want to learn and what they are good at or have experience in.
    - Presentation of the preliminary results of the data gathering analysis (Gabriella/Richard, based on input from Trudy): high-level overview of goals, input and activities of participants; commonalities and differences between participants (as input for the afternoon break-outs); what the organisations are already measuring, why and how (as input for the afternoon break-outs).
    Afternoon:
    - Discussion of initial thoughts on common strengths and weaknesses of the programmes (to be derived from the morning session and the expectations returned by the projects by email).
    - Break-outs: identify two elements of other projects that could be useful to one's own project, i.e. two learning elements (e.g. in measurement methods, analysing results, proving the value of the activities, setting learning goals, reaching target groups); explore how to build comparability into measurement results; identify leading experiences and good practices; introduce the partnering structure.
    Day 2 goal: set up the learning structure and find concrete ways to measure impact and show a project's value.
    Morning:
    - Presentation of the general methodology (Gabriella): what methodology we use and what impact indicators look like (previous research on impact indicators?).
    - Presentations on experiences with measurement by participants (e.g. Pane e Internet, Bibliotekas Pazangai, Commonwell). Both Pane e Internet and Bibliotekas Pazangai have already confirmed.
    Afternoon:
    - Set up the learning structure: form learning teams, define common goals, define ways to learn from each other (flying circus, virtual community, peer review), and define ways the consortium can add value to the learning process.