Field studies as an evaluation method for socio-technical interventions in Technology-Enhanced Learning

Much research in TEL is design work: the research team designs an intervention intended to support learning. This intervention needs to be evaluated to show the extent to which that goal has been reached, and to gain the additional insights that are sought. Field studies are one of the main types of evaluation. They are challenging to set up, and a badly designed study cannot easily be repeated, given the effort and cost of running a field study.

The goals of this lecture and workshop are:
- To provide a blueprint for field studies as an evaluation method for socio-technical interventions in technology-enhanced learning
- To present a hierarchical principle for evaluating learning interventions, based on Kirkpatrick & Kirkpatrick: usage/observable activities – learning – impact on task/work performance – impact on the organization (in workplace learning; applicable to settings in which individual learning impacts a wider social entity)
- To have students individually plan, in rough lines, a field study for their own PhD
- To discuss these plans with peers and the lecturer, as well as other senior researchers who may be present – i.e., students will get feedback on their own plan

The blueprint for field studies is to evaluate along a hierarchy of research questions/evaluation levels. First, one assesses the observable (learning) activities that are carried out – in particular, whether and how participants adhered to the prescribed intervention; this helps understand the success of the intervention and makes it possible to identify problems. Second, one assesses concrete learning outcomes – the insights that are generated. Third, one assesses a change in behaviour, and fourth, a change in performance. In parallel, a mix of qualitative and quantitative methods should be used: quantitative methods allow statistical comparison (pre/post; between groups), while qualitative methods yield in-depth explanatory insights.
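As a minimal illustration of the quantitative side, the pre/post comparison mentioned above can be sketched as a paired t-test over participants' scores before and after the intervention. The function name and the score data below are hypothetical, used only to show the shape of the analysis:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t-statistic for pre/post scores of the same participants."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)        # sample standard deviation of the differences
    t = mean_d / (sd_d / math.sqrt(n))    # t with n - 1 degrees of freedom
    return t, n - 1

# hypothetical test scores for five participants, before and after the intervention
pre = [10, 12, 9, 14, 11]
post = [13, 15, 10, 16, 14]
t, df = paired_t(pre, post)
```

The resulting t-statistic and degrees of freedom would then be compared against a t-distribution to judge significance; in practice a statistics package handles this step, and a between-groups design would use an unpaired test instead.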