The document outlines the roles, tasks, and estimated costs for implementing a national evaluation of research organizations (NERO) in the Czech Republic. It describes the evaluation management structure, including an Evaluation Management Board and Team. The Team would manage the evaluation process, support peer review panels, and analyze results. Evaluated units would complete self-assessments. Peer review panels would include international experts who would assess submissions remotely. The document estimates direct costs for panels, management, and IT support, as well as indirect costs for evaluated units. Total costs are estimated to remain below 1% of public R&D funding over 5 years, as required.
This document provides information about quality management procedures including forms, tools, and strategies. It discusses the purpose and requirements of quality management procedures such as having quality objectives and using procedures/deliverables like a project management plan, communication management procedure, risk management procedure, and checklists. Quality management tools explained include check sheets, control charts, Pareto charts, scatter plots, Ishikawa diagrams, and histograms. Other related topics like quality management systems and standards are also listed.
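To illustrate one of the tools listed, a Pareto analysis (ranking problem categories and flagging the "vital few" that account for roughly 80% of occurrences) can be sketched in a few lines; the defect categories, counts, and 80% threshold below are invented for the example:

```python
from collections import Counter

def pareto_analysis(observations, threshold=80.0):
    """Rank categories by frequency and flag the 'vital few' that
    account for ~80% of occurrences (the classic Pareto rule)."""
    counts = Counter(observations)
    total = sum(counts.values())
    ranked, cumulative, vital_few = [], 0.0, []
    for category, count in counts.most_common():
        cumulative += 100.0 * count / total
        ranked.append((category, count, round(cumulative, 1)))
        # Always keep at least one category in the vital few.
        if cumulative <= threshold or not vital_few:
            vital_few.append(category)
    return ranked, vital_few

# Hypothetical defect log for a production line.
defects = (["misprint"] * 42 + ["scratch"] * 31 + ["dent"] * 12
           + ["stain"] * 9 + ["other"] * 6)
ranked, vital = pareto_analysis(defects)
```

The `ranked` list doubles as the data behind a Pareto chart: bars from the counts, the cumulative-percentage line from the third column.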
ENERGY EFFICIENCY EVALUATION_CONCEPTS & BEST PRACTICES / Charles Xu
This document provides an overview of key concepts and best practices for energy efficiency program evaluation. It is divided into two parts. Part 1 covers general concepts such as types of evaluations, impact evaluation techniques, data sources, and the evaluation process. Part 2 focuses on specific techniques, including logic modeling, statistical billing analysis, engineering computation, sampling, measurement and verification, and determining net savings. The document aims to comprehensively yet concisely describe established theories and methodologies for evaluating the impact and effectiveness of energy efficiency programs.
The document provides feedback on a draft summary report for research evaluation methodology in the Czech Republic. It covers many topics and opinions are divided on several issues. Some view the reports as well-written and justified while others see them as too general. There is contrasting feedback on topics like self-assessment, treatment of PhD students and temporary workers, and assessment of research environment. The document also notes a few incorrect statements in the draft report and provides counterpoints on issues like applied research outputs and dividing duties between teaching and research. It advocates for a learning process to begin in applying the new methodology.
The proposed institutional funding principles and their rationale / MEYS, MŠMT (in Czech)
/ Barbara Good, Brigitte Tiefenthaler
+ more on the 2nd International Conference of the IPN Metodika project in the article "The conference on funding brought a stimulating discussion": http://metodika.reformy-msmt.cz/2-mezinarodni-konference
This document discusses adjustments made to the bibliometric reports used in pilot testing a new methodology for evaluating research organizations in the Czech Republic. Key changes included adding explanatory labels, reformatting tables and graphs to include numbers and bases for percentages, and modifying some indicator calculations. Problems encountered in compiling the reports centered around unclassified scientists, inaccuracies in mapping publication records between databases, errors in processing data and indicators, and inefficient formatting of reports. Recommendations focused on standardizing the bibliometric report format, presenting key indicators in overview tables, accounting for publications with multiple institutional authors, and expanding coverage of outputs from social sciences and humanities.
IS VaVaI as the information tool for the new Institutional Evaluation Methodo... / MEYS, MŠMT (in Czech)
IS VaVaI is a comprehensive database that documents over 26 billion CZK in annual public expenditures on research and development in the Czech Republic, along with some 50-60 thousand research outputs each year. It tracks research organizations, projects, personnel, outputs, and funding. It serves as the primary source of information on R&D in the Czech Republic, supporting analyses, evaluations, and transparency. The document argues that IS VaVaI could effectively serve as the core information tool to support the new Institutional Evaluation Methodology by providing relevant data, managing the evaluation process, and disseminating results in an efficient manner.
Evaluation systems in international practice / This report constitutes a background report to the Final report 1 – The R&D Evaluation Methodology. It collects the outcomes of the analyses related to the evaluation systems in ten countries.
This document provides an evaluation methodology for research infrastructures (RIs) in the Czech Republic. It defines RIs and their key characteristics, such as having stable management, intellectual property rights strategies, user access strategies, and development strategies. The evaluation methodology aims to create a unified set of rules and transparent process to evaluate RIs at different stages of their lifecycle to inform strategic decision making and funding allocation regarding the establishment, support, and termination of RIs.
The document discusses peer review processes for research funding and publications. It defines peer review as the evaluation of scientific work by qualified experts in the same field. The key areas of peer review application are the evaluation of research findings for publication, research/innovation proposals for funding, and the evaluation of research teams/institutions. The document outlines best practices for designing peer review processes, including selecting qualified peers, developing assessment criteria, and establishing review panels to evaluate and rank proposals in a transparent, impartial manner.
This document provides an overview of bibliometrics and scientometrics. It defines bibliometrics as the application of mathematical and statistical methods to books and other media, and scientometrics as the application of quantitative methods to analyze science as an information process. The lecture covers the history of bibliometrics, key data sources, basic metadata used for indicators, and four types of bibliometric indicators: research activity, research profile, research collaboration, and impact on further research. It also discusses the use of bibliometrics in evaluating the social sciences and humanities as well as research assessment.
PRINCIPLES OF INSTITUTIONAL FUNDING / Final Report 2 / MEYS, MŠMT (in Czech)
Final Report 2 is one of the three Final Reports of the study preparing the Evaluation Methodology and funding principles for the research and development system in the Czech Republic. The report describes the new principles for the institutional funding of research organisations (RO).
+ http://metodika.reformy-msmt.cz/zasady-institucionalniho-financovani
This report is one of the three Final reports of the study developing an evaluation methodology and institutional funding principles for the R&D system in the Czech Republic. It describes the methodology for the National Evaluation of Research Organisations (NERO) in the Czech Republic.
According to the founder of Grovo, a start-up specialising in microlearning, a few minutes of attention are worth more than a long lecture when it comes to training.
"How and when you learn actually matters more than what you learn."
Research thesis - Community management and social media strategy in B2B / Nicolas Durand
My research thesis on B2B social media strategies. The first part introduces the main concepts of community management, and the thesis then addresses the following question: "Which social media strategy should a community manager working in the B2B market adopt? Practices, interactions and social networks." More than 166 people took part in the study to test the hypotheses formulated.
This background report holds the guidelines and templates for the implementation of the National Evaluation of Research Organisations (NERO), for the use of the main actors involved (evaluation management, panel experts, referees and research organisations).
The Small Pilot Evaluation - Feedback and Results / This report constitutes a background report to the Final report 3 – The Small Pilot Evaluation and the Use of the RD&I Information System for Evaluation. It focuses on the outcomes of the Small Pilot Evaluation (SPE) that was implemented in the context of this study from September 2014 (launch of the preparatory activities) to the end of January 2015 (final panel reports).
Guidance for the HIS strategic planning process (1) / MoIRP
This document provides guidance for countries on developing strategic plans to strengthen their health information systems (HIS). It outlines a 12-step process organized into 3 modules: 1) Preparing for planning, which includes analyzing HIS assessment results and identifying priority issues; 2) Priority setting and strategy design, including setting objectives and identifying intervention strategies; and 3) Detailed planning, costing, and establishing monitoring and evaluation. The goal is to guide countries through developing a prioritized, costed multi-year strategic plan to strengthen their HIS with broad stakeholder input and ownership.
Fuzzy Analytical Hierarchy Process Method to Determine the Project Performanc... / IRJET Journal
This document discusses using the fuzzy analytical hierarchy process (F-AHP) method to analyze project performance in a project portfolio. It involves 6 steps: 1) defining the problem and criteria, 2) creating a comparison matrix, 3) calculating weights, 4) setting up triangular fuzzy numbers, 5) calculating fuzzy synthesis values, and 6) ranking projects. The researchers apply this method using data from 4 projects involving timeline, cost, revenue, and resources. They calculate criteria weights and determine that Project 1 has the highest weight, indicating it is performing best in the portfolio based on the analysis. The document concludes that F-AHP can accurately analyze project performance and assist managers in decision-making.
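The weighting steps (comparison matrix, triangular fuzzy numbers, synthesis, ranking) can be sketched with a Buckley-style fuzzy geometric mean, which is one common F-AHP variant and not necessarily the exact procedure of the paper; the criteria, judgements, and fuzzification spread below are invented for illustration:

```python
import math

def fuzzy_ahp_weights(matrix):
    """Buckley-style fuzzy AHP: matrix[i][j] is a triangular fuzzy
    number (l, m, u) comparing criterion i to criterion j.
    Returns crisp, normalized criterion weights."""
    n = len(matrix)
    # Fuzzy geometric mean of each row.
    geo = []
    for row in matrix:
        l = math.prod(t[0] for t in row) ** (1 / n)
        m = math.prod(t[1] for t in row) ** (1 / n)
        u = math.prod(t[2] for t in row) ** (1 / n)
        geo.append((l, m, u))
    sum_l = sum(g[0] for g in geo)
    sum_m = sum(g[1] for g in geo)
    sum_u = sum(g[2] for g in geo)
    # Fuzzy weight of each row: divide by the total (l/sum_u, m/sum_m, u/sum_l).
    fuzzy_w = [(g[0] / sum_u, g[1] / sum_m, g[2] / sum_l) for g in geo]
    # Defuzzify by centroid and renormalize to sum to 1.
    crisp = [(l + m + u) / 3 for l, m, u in fuzzy_w]
    total = sum(crisp)
    return [w / total for w in crisp]

# Hypothetical 3-criterion comparison (e.g. timeline vs cost vs revenue).
tfn = lambda x: (x * 0.8, x, x * 1.2)           # fuzzify a crisp judgement
inv = lambda t: (1 / t[2], 1 / t[1], 1 / t[0])  # fuzzy reciprocal
one = (1.0, 1.0, 1.0)
M = [[one,          tfn(3),       tfn(5)],
     [inv(tfn(3)),  one,          tfn(2)],
     [inv(tfn(5)),  inv(tfn(2)),  one]]
weights = fuzzy_ahp_weights(M)
```

Projects can then be ranked by the weighted sum of their (normalized) scores on each criterion, as in step 6 of the paper.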
This document provides a project management plan template for an economic development department. It includes sections on the project introduction and background, executive summary, scope, work breakdown structure, cost and staffing management plans, stakeholder analysis, implementation plan, risk management plan, procurement management plan, log frame, evaluation plan, and annexures including a target setting worksheet and reporting template. The template provides guidance on the key elements to include in each section to effectively plan and manage the project.
This document discusses various techniques for evaluating projects, including:
- Strategic assessment to evaluate how projects align with organizational goals and strategies.
- Technical assessment to evaluate functionality against available hardware, software, and solutions.
- Cost-benefit analysis to compare expected project costs and benefits in monetary terms over time.
- Cash flow forecasting to estimate costs and benefits over the project lifecycle.
- Risk evaluation to assess potential risks and their impacts.
Project evaluation is important for determining progress, outcomes, effectiveness, and justification of project inputs and results. The challenges include commitment, establishing baselines, identifying indicators, and allocating time for monitoring and evaluation.
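In their simplest form, the cost-benefit and cash-flow techniques above come down to discounting future flows to a present value; a minimal net-present-value sketch with made-up figures:

```python
def npv(rate, cash_flows):
    """Net present value of a cash-flow series; cash_flows[0] is the
    initial (usually negative) outlay at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: 100k outlay, then five years of net benefits,
# discounted at an assumed 8% rate.
flows = [-100_000, 30_000, 30_000, 30_000, 30_000, 30_000]
value = npv(0.08, flows)
```

A positive NPV at the chosen discount rate is the usual acceptance signal in cost-benefit terms; comparing NPVs at several rates is a simple sensitivity check.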
This document discusses project cost management processes including resource planning, cost estimating, cost budgeting, and cost control. It provides details on the inputs, tools and techniques, and outputs for each process. The overall goal is to estimate, budget, and control costs so the project can be completed within the approved budget.
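Cost control in this sense is often operationalised with earned-value formulas; a small sketch with invented status figures (the function name and numbers are illustrative, not from the document):

```python
def cost_control_metrics(bac, ev, ac):
    """Standard earned-value cost-control figures:
    CV  (cost variance)          = EV - AC
    CPI (cost performance index) = EV / AC
    EAC (estimate at completion) = BAC / CPI, assuming current
                                   cost efficiency continues."""
    cpi = ev / ac
    return {"CV": ev - ac, "CPI": cpi, "EAC": bac / cpi}

# Hypothetical status: 200k budget, 80k of value earned for 100k spent.
m = cost_control_metrics(bac=200_000, ev=80_000, ac=100_000)
```

A CPI below 1 (here 0.8) signals cost overrun, and the EAC projects the overrun to completion, which is exactly the kind of output the cost-control process feeds back into budgeting.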
BR 8 / Předběžné (ex-ante) zhodnocení navrhovaného systému financování / MEYS, MŠMT (in Czech)
Ex-ante assessment of the proposed funding system / This report constitutes a background report to the Final report 2 – The institutional funding principles.
The document discusses improving software testing processes at XYZ Company. It begins with objectives to analyze the existing testing process, identify areas for improvement, and reduce costs. It then provides background on software testing, including definitions, the purpose of testing, and why test process improvement is needed. The document outlines steps for test process improvement, including determining goals, analyzing the current situation, and implementing changes. It reviews literature on test process improvement models, focusing on the Test Process Improvement (TPI) model as a framework with key areas, maturity levels, checkpoints, and improvement suggestions.
1. The report assesses the validity and reliability of governance indicators in public financial management (PFM) and corruption using exploratory and confirmatory factor analysis.
2. For PFM, the Open Budget Survey indicators show reliability and validity, suggesting the data is of good quality. However, the Public Expenditure and Financial Accountability indicators measure similar constructs with weak validity, calling their quality into question.
3. For corruption, aggregate measures from different sources measure the same underlying concept of corruption, providing evidence of their validity. However, the Global Corruption Barometer indicators do not adequately measure the concepts they aim to, indicating room for improvement in their construction.
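Reliability assessments of this kind commonly report an internal-consistency statistic such as Cronbach's alpha alongside the factor analysis; a pure-Python sketch on invented respondent scores (not data from the report):

```python
def cronbach_alpha(items):
    """Internal-consistency reliability of a set of indicator items.
    `items` is a list of k equally long score lists (one per item);
    alpha near 1 suggests the items measure one underlying construct."""
    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical scores of 5 respondents on 3 related indicators.
items = [[4, 3, 5, 2, 4],
         [5, 3, 4, 2, 4],
         [4, 2, 5, 3, 5]]
alpha = cronbach_alpha(items)
```

A common (though debated) rule of thumb treats alpha above roughly 0.7 as acceptable reliability for an indicator set.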
This document provides an overview of the research project objectives, approach, and information gathering methods. The objectives are to evaluate the budgetary control system of Sur Construction PLC, identify its links to performance management and decision making, and provide recommendations. Secondary data sources will be used, including literature and Sur's internal documents. Information will be collected from Sur's commercial, planning, finance, and HR departments. The budgetary control system will be evaluated based on its administration, preparation, and control components.
This document discusses the logical framework approach to project planning and evaluation. It begins by defining the logical framework as a tool used to conceptualize a project, analyze assumptions, and facilitate monitoring and evaluation. It then explains the key components of a logical framework matrix, including goals, objectives, outputs, inputs, indicators, means of verification, and assumptions. Finally, it outlines some important benefits of the logical framework such as reducing planning confusion, determining responsibility and management, facilitating evaluation, and ensuring projects are accessible and straightforward.
This document provides guidance on planning for monitoring and evaluation of country programs and projects. It discusses key principles for planning including developing an overall work plan, minimum requirements, and planning at the country program level. It also describes the planning process for monitoring and evaluation, including planning for monitoring, planning for evaluation, and project work planning. Key steps discussed are assessing needs, assessing current monitoring, reviewing monitoring scope/tools, and adapting/designing monitoring mechanisms. Criteria for selecting outcomes to evaluate are also outlined.
Master thesis Solvay Brussels School 2012-2014 / Teodora Virban
This document presents a comparative analysis of European Commission evaluations of development cooperation programmes in four countries between 2012-2014. It provides an overview of program evaluation concepts and the European Commission's approach to evaluations. The analysis examines four evaluation reports from Colombia, Jamaica, Nepal and the Philippines to compare how key evaluation criteria were addressed and the overall methodology and structure of the reports. The goal is to assess the level of consistency in approach across the different evaluations.
ITIL Process Assessment - Service Strategy / https://flevy.com/browse/business-document/itil-process-assessment--service-strategy-xls-3666
This Excel spreadsheet system with approximately 300 questions allows you to conduct an assessment of the ITIL v3 Service Strategy processes:
1 Strategy Management for IT Services
2 Service Portfolio Management
3 Financial Management for IT Services
4 Demand Management
5 Business Relationship Management
The assessment highlights areas that require particular attention and gives you an idea of process maturity. It can also be used as a benchmarking mechanism and as a boost in creating a continual-improvement culture for your ITSM / ITIL processes.
The assessment is based on the Process Maturity Framework (PMF), as recommended in the ITIL Service Design book. Maturity rating levels are:
Level 1: Initial
Level 2: Repeatable
Level 3: Defined
(Level 3 +: Deployed )
Level 4: Managed
Level 5: Optimizing
The use of the PMF in the assessment of service management processes relies on an appreciation of the IT organization growth model. At the process level, the assessment covered the following groups of questions regarding process attributes to establish process maturity:
1. Process performance (outcomes achieved)
2. Performance management (activities performed)
3. Work product management (inputs/outputs)
4. Process definition (roles, documentation)
5. Process deployment (accepted, performed)
6. Process measurement
7. Process control
8. Process innovation
9. Process optimisation
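A simple roll-up of such per-level questions into a PMF rating might look like the sketch below; the pass-rate rule and the 80% threshold are assumptions of this example, not taken from the spreadsheet:

```python
def assess_maturity(answers_by_level, threshold=0.8):
    """Roll questionnaire answers up to a PMF-style maturity rating.
    `answers_by_level` maps level (1..5) to a list of booleans
    (question passed or not). A level counts as achieved when the
    share of passed questions meets the threshold; the rating is the
    highest level for which it and every level below are achieved."""
    rating = 0
    for level in sorted(answers_by_level):
        passed = answers_by_level[level]
        if passed and sum(passed) / len(passed) >= threshold:
            rating = level
        else:
            break
    return rating

# Hypothetical answers for one process (e.g. Demand Management).
answers = {
    1: [True, True, True, True],          # 100% -> achieved
    2: [True, True, True, False, True],   # 80%  -> achieved
    3: [True, False, False, True],        # 50%  -> not achieved
    4: [True, True],
    5: [False],
}
level = assess_maturity(answers)  # -> 2 ("Repeatable")
```

Requiring every lower level to be achieved mirrors the staged logic of maturity models: strong answers at level 4 do not compensate for gaps at level 3.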
This document describes the methodology for organizing semantic evaluation campaigns. It outlines the key phases of an evaluation campaign: initiation, involvement, preparation and execution, and dissemination. Actors involved include evaluation campaign organizers and participants. The methodology provides guidelines for each phase and recommendations based on lessons from previous campaigns.
This is PMBOK Guide Planning Process Group, part two. It covers two knowledge areas - Time and Cost Management - with nine processes: Plan Schedule Management, Define Activities, Sequence Activities, Estimate Activity Resources, Estimate Activity Durations, Develop Schedule, Plan Cost Management, Estimate Costs, and Determine Budget.
This document outlines the functional implementation of project controls. It discusses key aspects of project controls including project planning and scheduling, cost estimating and budgeting, communication and risk management, and scope and change management. It describes how project controls is integrated into the project management process to improve work efficiency and ensure projects meet their objectives. Project controls functions include developing the work breakdown structure, schedules, budgets, and reports to monitor project performance and support decision making. The implementation of project controls is tailored to each organization but generally involves project planning, execution, monitoring and control throughout the project lifecycle.
This document summarizes key points from a workshop on monitoring and evaluation (M&E) held in Tamale, Ghana from September 3-5, 2013. It discusses how a story about a horse can relate to project planning and implementation. It then defines what constitutes a project and its essential elements. The document outlines project cycles from various sources and stages where monitoring should be included. It distinguishes between monitoring and evaluation, describing monitoring as ongoing and evaluation as a one-time assessment. Key sources referenced are also listed.
Similar to BR 4 / Detailed cost framework of the evaluation (20)
Pilot test of new evaluation methodology of research organisations / MEYS, MŠMT (in Czech)
In 2015, the pilot test of a new evaluation methodology for research, development and innovation was carried out, with twelve research organisations active predominantly in two fields (Chemistry and History) participating. Three main and nine subject panels, comprising thirty-five foreign and six local experts, prepared evaluation reports on thirty-one registered, field-specific research units. The key output of the pilot test is the feedback of the panellists and the evaluated institutions, which is useful for the preparation of a nationwide evaluation of research organisations in the Czech Republic.
This report is one of the three Final reports of a study developing an evaluation methodology and institutional funding principles for the R&D system in the Czech Republic. It describes the new principles for the institutional funding of research organisations (RO) in the Czech Republic.
The Small Pilot Evaluation and the Use of the RD&I Information System for Eva... / MEYS, MŠMT (in Czech)
This document summarizes the findings from a small pilot evaluation (SPE) testing a new research evaluation methodology and use of an RD&I information system in the Czech Republic. The SPE involved a few research organizations and aimed to test the evaluation processes and methodology. Feedback indicated some inefficiencies in the SPE and ways to improve the processes and quality of submitted information. The document also describes the RD&I information system and its potential to support the evaluation methodology by providing reliable data and reducing indirect costs if utilized fully. Extensions to the system are proposed to optimize its usefulness for evaluations.
Summary Report / R&D Evaluation Methodology and Funding PrinciplesMEYS, MŠMT in Czech
This study has been undertaken under a contract to the Ministry of Education, Youth
and Sports of the Czech Republic by a team from Technopolis, the Technology Centre
ASCR, NIFU, and Infoscience Praha.
V tradičním pojetí jsou rizika chápána negativně, tedy jako hrozby. Tím, že jsou identifikována a vyhodnocena, vytvářejí ale současně příležitosti, které lze využít pro zlepšení výsledku projektu nebo pro snadnější uvádění jeho výsledků do života. Z veřejného projednávání dokumentů IPN Metodika, diskusí na konferencích a zpětné vazby výzkumných organizací, které se účastnily dvou pilotních ověřování nové metodiky hodnocení, byly získány některé podněty, které jsou považovány za rizika pro zavádění národního hodnocení výzkumných organizací (NERO).
Analýza má posloužit k podrobnějšímu rozboru možností a k doporučení a zdůvodnění výběru určitého řešení. Jednoznačná identifikace je důležitá nejen pro výzkumníky, aby mohli prezentovat výsledky své vědecké činnosti, ale využívá se též pro hodnocení autorů a pracovišť, vydavatelům a poskytovatelům finančních podpor zjednodušuje administrativu a usnadňuje organizaci databází. Zavedením trvalého jedinečného identifikátoru výzkumníka se vyřeší jednoznačné přiřazení výstupů a dalších profesních aktivit danému vědeckému pracovníkovi.
Dokument je zpracovaný na základě zkušeností z pilotního ověření navrhované metodiky hodnocení výzkumných organizací a z diskusí s odbornou veřejností během celé doby trvání projektu. Zaměřen je nejen na změny vnitřních předpisů veřejných vysokých škol, ale také na dlouhodobé záměry vysokých škol. Jsou formulována doporučení pro veřejné vysoké školy na jedné straně a MŠMT na straně druhé.
Podklady a doporučení pro zapracování do věcného záměru zákona nahrazujícího ...MEYS, MŠMT in Czech
Dokument analyzuje, zda vůbec a popřípadě v jaké míře jsou nutné změny regulatorního rámce v oblasti výzkumu, vývoje a inovací, které by si vyžádala realizace návrhů vzešlých z výstupů Individuálního projektu národního Efektivní systém hodnocení a financování výzkumu, vývoje a inovací. Jsou zohledněny i probíhající záměry úprav legislativních předpisů.
Harmonogram postupných kroků realizace návrhů nového hodnocení a financování ...MEYS, MŠMT in Czech
Tato implementační doporučení zpracoval expertní tým IPN Metodika v závěrečné fázi projektu na základě zkušeností získaných během úzké spolupráce se zpracovatelem studie „Metodika hodnocení ve výzkumu a vývoji a zásady financování“ společností Technopolis Group z Velké Británie dále také díky bohatým zkušenostem z realizace pilotního ověření navržené metodiky hodnocení týmem projektu, díky intenzivním diskusím se zainteresovanými subjekty, s odbornou veřejností a zahraničními experty po celou dobu trvání projektu. Doporučení jsou výsledkem podrobných diskusí realizačních aspektů v rámci širokého expertního týmu IPN Metodika.
Pilotní ověření návrhu nové metodiky hodnocení výzkumných organizacíMEYS, MŠMT in Czech
Dokumenty shrnují výsledky pilotního ověření metodiky NERO hodnocení výzkumné činnosti vycházející ze závěrů a doporučení Technopolisu, které bylo realizováno týmem projektu v průběhu roku 2015.
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
Biological screening of herbal drugs: Introduction and Need for
Phyto-Pharmacological Screening, New Strategies for evaluating
Natural Products, In vitro evaluation techniques for Antioxidants, Antimicrobial and Anticancer drugs. In vivo evaluation techniques
for Anti-inflammatory, Antiulcer, Anticancer, Wound healing, Antidiabetic, Hepatoprotective, Cardio protective, Diuretics and
Antifertility, Toxicity studies as per OECD guidelines
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
Natural birth techniques - Mrs.Akanksha Trivedi Rama University
BR 4 / Podrobný nákladový rámec hodnocení (Detailed evaluation cost framework)

R&D Evaluation Methodology and Funding Principles
Background report 4: Detailed evaluation cost framework
May 2015

Bea Mahieu, Oliver Cassagneau-Francis – Technopolis Group
Table of Contents

1. Introduction
2. Background: the roles and tasks of the actors involved
2.1 The evaluation management structure
2.1.1 The components of the evaluation management structure
2.1.2 Overview of the tasks for the Evaluation Management Team
2.1.3 Timeline of the evaluation process
2.2 Roles and tasks of the Evaluated Units (EvUs)
2.2.1 Tasks in the evaluation process
2.2.2 Data and information required
2.3 Roles and tasks of the evaluation panels
2.4 The role of the RD&I Information System (IS VaVaI)
3. Detailed cost framework
3.1 Cost estimate for a full-scale evaluation based on the Evaluation Methodology
3.2 Detailed estimates of the costs for the evaluation panels (direct costs)
3.2.1 Volume of the workload
3.2.2 The number of panel experts required
3.2.3 The number of meetings and days needed for remote assessment/review
3.2.4 Total number of mandays needed
3.2.5 Day rate per type of expert
3.2.6 Total costs for the panel experts’ work
3.2.7 Travel, hotel and subsistence costs
3.2.8 Total costs for the evaluation panels
3.3 Detailed estimates of the costs for the evaluation management (direct costs)
3.3.1 The Evaluation Management Team
3.3.2 The drafting of the bibliometric reports (to inform the panels)
3.3.3 IT services, including the use of and extensions to the RD&I Information System
3.4 Detailed estimates of the indirect costs
Table of Exhibits

Exhibit 1 The evaluation cycle
Exhibit 2 Main tasks of the evaluation management team
Exhibit 3 Indicative time-line of the evaluation process
Exhibit 4 Costs of PRFS in other countries
Exhibit 5 Maximum total costs of the new evaluation system
Exhibit 6 Cost estimate for a full-scale evaluation
Exhibit 7 Estimated volume of the workload
Exhibit 8 Number of experts involved
Exhibit 9 Number of panel meetings and number of days needed per panel
Exhibit 10 Total number of mandays needed per type of expert
Exhibit 11 Time investment required per expert
Exhibit 12 Day rate per type of expert
Exhibit 13 Total manday costs per type of expert and overall
Exhibit 14 Standard travel and hotel & subsistence costs
Exhibit 15 Trips and hotel days per type of panel expert
Exhibit 16 Total travel and hotel & subsistence costs
Exhibit 17 Total costs for the evaluation panels
Exhibit 18 Standard salary rates per staff level
Exhibit 19 Tasks and mandays for the panel secretariat
Exhibit 20 Total costs for the Evaluation Management Team activities
Exhibit 21 Breakdown of the costs for the drafting of the bibliometric reports (to inform the panels)
Exhibit 22 Total evaluation management costs
Exhibit 23 Time and HR investment in the SPE (average per RU)
Exhibit 24 Breakdown of the indirect costs
1. Introduction
This document is a background report to the Final report 1 – The R&D Evaluation
Methodology. It sets out the detailed cost framework for the implementation of the
National Evaluation of Research Organisations – NERO.
The Terms of Reference for the study required us “to design the evaluation in a way
that the total cost of its implementation (direct and indirect costs, costs incurred by
research organizations (ROs), evaluation organizers and providers) does not exceed
1.0 percent of public institutional funding for R&D for a five-year period (i.e.
estimated according to SB and 2012 prices, ca CZK 677 million).”
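The cap can be expressed as a one-line calculation. The sketch below (illustrative Python, not part of the study) back-derives the implied annual funding base from the CZK 677 million figure quoted above:

```python
def cost_ceiling(annual_funding_czk: float, years: int = 5,
                 cap: float = 0.01) -> float:
    """Maximum total evaluation cost: a fixed share of public
    institutional R&D funding over the evaluation period."""
    return annual_funding_czk * years * cap

# CZK 677 million at a 1% cap over five years implies roughly
# CZK 13.54 billion of institutional funding per year (2012 prices).
ceiling = cost_ceiling(13.54e9)
print(f"CZK {ceiling / 1e6:.0f} million")  # CZK 677 million
```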
These cost limits were a factor that we took into consideration throughout the entire
process of the evaluation methodology design. Our objective was to develop a robust
evaluation methodology that would ensure a fair evaluation, while guaranteeing cost
efficiency for its implementation, within the limits required. We therefore designed
the operational set-up for the evaluation, bearing in mind the need for cost and time
efficiency, for all actors involved.
We estimated the costs of a full-scale evaluation using an activity-based cost approach.
We took into consideration all costs, both the ‘direct’ ones for the organisation of the
evaluation, the handling of the process, and the evaluation panels, and the indirect
ones, i.e. the costs related to the time that the evaluated organisations will need to
invest in their self-assessments.
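An activity-based estimate of this kind can be sketched as follows. The cost figures here are placeholders, not the report's actual estimates (those are derived in Section 3); the point is the structure: each activity carries its own cost, the activities are grouped into direct and indirect, and the total is checked against the required ceiling.

```python
# Placeholder activity costs in CZK millions; the real figures are
# derived in Section 3 of this report.
direct_costs = {
    "evaluation panels": 250.0,      # experts' fees, travel, subsistence
    "evaluation management": 120.0,  # board, team, panel secretariats
    "bibliometric reports": 15.0,
    "IT services": 30.0,
}
indirect_costs = {
    "EvU self-assessments": 200.0,   # time invested by evaluated units
}

ceiling = 677.0  # CZK millions: 1% of five-year institutional funding
total = sum(direct_costs.values()) + sum(indirect_costs.values())
print(f"total CZK {total:.0f}m, within cap: {total <= ceiling}")
```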
This background report is structured as follows:
It starts with setting out the context for the cost framework. Section 2 provides a brief
description of the elements in the evaluation methodology design that influence the
costs of the evaluation implementation. This regards the roles and tasks of the actors
involved in the evaluation implementation, i.e. the Evaluation Management Team, the
Evaluated Units, the IT staff in charge of the RD&I Information System, and the
evaluation panels.
In Section 3 we report on the estimated costs of a full-scale evaluation implementing
the EM, covering both the direct and the indirect costs.
An excel file with the detailed cost framework calculations has been delivered to the
IPN team.
The roles and tasks of the actors involved in the NERO implementation as well as the
process and timelines are described in detail in the Evaluation Handbook
(Background report 5).
2. Background: the roles and tasks of the actors involved
In this chapter we describe the role and tasks of the evaluation management structure
(Section 2.1), the Evaluated Units participating in the evaluation (Section 2.2), the
evaluation panels (Section 2.3). In the final section, we cover the role of the RD&I
Information System (Section 2.4).
2.1 The evaluation management structure
2.1.1 The components of the evaluation management structure
We defined the entities responsible for the management of the National Evaluation of
Research Organisations (NERO) implementation as follows:
The Evaluation Management Board, acting as overall governance and supervisory
body
The Evaluation Management Team, responsible for the operational management
of evaluation
The tasks of the evaluation management structure, and in particular the evaluation
management team, are not limited to the management of a specific run of NERO but
should be seen in a broader and longer-term perspective. In fact, the implementation
of NERO is part of an evaluation cycle (Exhibit 1). In this study we have designed an
evaluation and funding system that is intended to constitute an integral part of the
R&D policy cycle in the Czech Republic. Especially because the evaluation results will
inform the performance-based research-funding component of the institutional
funding system, NERO needs to be considered as a policy intervention. As a
consequence, to a certain extent the evaluation methodology will be ‘dynamic’, i.e.
acting upon and reflecting changing policy needs. As for any other policy intervention,
its effects also need to be monitored, both the intended and unintended ones, so that
the methodology can be adjusted if needed in the next run of the evaluation.
Exhibit 1 The evaluation cycle
[Cycle diagram with stages: issue identification & policy decision; design/revision of the methodology; launch of the evaluation; self-assessment & informed peer-review; reporting; monitoring; feedback/consultation; communication.]
The evaluation cycle can be divided into three main phases:
The preparation phase during which the evaluation methodology is revised (if
necessary) and the evaluation is launched with the publication of the Evaluation
Protocol
The NERO implementation phase during which the self-assessment and panel
assessments will take place and the results of the evaluation reported and
communicated
The follow-up phase during which the effects of the evaluation outcomes on the
RD&I system will be monitored and analysed, including a consultation of the
RD&I community and policymakers
Central to this evaluation cycle stands the policy decision for future revision of the
evaluation methodology, based upon the analysis performed in the follow-up phase
and taking into consideration the RD&I policy priorities.
A stable structure for the evaluation management is needed in order to ensure not
only a professional implementation of NERO but also the effective take-up of the
follow-up tasks, i.e. collecting the needed evidence for the policy decisions to be taken
for the next run of NERO. One should also envisage that this evaluation structure will
be in charge of the implementation of programme evaluation in the Czech RD&I
governance system.
This implies that the Evaluation Management Team should consist of a core team that
will include a number of directors and office staff members, complemented with
temporary staff for each specific run of NERO. For the cost estimate we assumed:
A core team of 10 staff members, i.e. 3 staff members at director level and 7 staff
members of the Evaluation Secretariat including evaluation experts, an IT expert,
and administrative staff
15 additional staff members (FTE) that will support the core team during the time
of peak activity
9 additional staff members (FTE) that will constitute the panel secretariats
These numbers are based on international experience, taking into account the size of
the RD&I system - and therefore the workload for the evaluation management. For
example, the core team for the UK RAE 2008 consisted of 19 staff members in total,
and HEFCE employed approximately 40 additional temporary staff for the duration of
the RAE 2008 exercise (i.e. 2 years).
An Evaluation Management Board will supervise the activities of the Evaluation
Management Team and carry the ultimate responsibility. In the Final report 1 - The
R&D Evaluation Methodology we mention that in order to reach the maximum level
of legitimisation, this body should be composed of representatives of the R&D
governing bodies (funding ministries and ministries with responsibilities for research
institutes, the Academy and the agencies), chaired by a representative of the
Government Office. We assume a board of 6 members and a Chair, nominated for the
duration of the entire evaluation process, i.e. the preparation and the implementation
phase.
2.1.2 Overview of the tasks for the Evaluation Management Team
Exhibit 2 lists the main tasks of the evaluation management team in the different
phases of the evaluation cycle.
Exhibit 2 Main tasks of the evaluation management team
[Diagram. Preparation: set-up of the panel evaluation structure; design & publication of the evaluation protocol; set-up of the IT support structure; registration of the EvUs and RUs. Evaluation: finalisation of the evaluation panel structure; handling the submission of research outputs & self-assessments; data analysis and reporting to the panels; supporting the peer review and panel evaluation; random audits; RU Panel reports, EvU reports, Field and Disciplinary Area reports. Follow-up: final management report; analytical support to the funding bodies; monitoring of the effects; consultation process for revision.]
Specifically, in the preparation phase, the main management tasks are:
The set up of the panel evaluation structure – this includes the drafting of
confidentiality and conflict of interest statements; the construction or update of an
international experts database; and the identification and selection of the Main
panel chairs and members
The design and publication of the evaluation protocol – this stands for the
finalisation of the evaluation methodology, including the eventual update of
eligibility criteria (thresholds, outputs, etc), assessment criteria and indicators; the
set-up of the self-assessment reports; the drafting of the bibliometric data reports;
the design and development of the guidelines for the panels and the participating
research organisations; the drafting of the panel report templates; the planning of
the evaluation process and establishment of deadlines; the drafting of the
evaluation protocol itself; and the launch of the evaluation
The set-up of the IT support structure – this regards the development of the
platforms for the submission of the selected research outputs, and the self-
assessment reports as well as for the workflow management of the panels’ and
reviewers’ work
The management of the registration of Evaluation Units and Research Units –
this includes the delivery of data support to the Evaluated Units through an
analysis of the data stored in the RD&I Information System; the set-up of a help
desk for the Evaluated Units; and the eligibility and quality check of the
registrations
Main management tasks in the evaluation phase are:
Finalisation of the panel evaluation structure – this stands for the definition of
the Subject panels; the identification, selection and contracting of the Subject
panel chairs and members and the referees; the management of the decision-
making on cross-referrals and Inter-disciplinary Research Units; and the
contracting of the panel secretariats
Handling the submission of the research outputs and self-assessments – relevant
management tasks include the helpdesk for the Evaluated Units; the delivery of
data support to the Evaluated Units; the eligibility and quality check as well as
control on confidentiality issues; management of the panels’ workflow and the
transfer of submitted outputs and self-assessments.
Data analysis and reporting to the panels – this includes the collection and
analysis of bibliometric data - at the level of disciplinary areas, fields, and
Research Units; and the drafting of the bibliometric data reports¹
Supporting the peer review and panel evaluation – this includes a broad range of
support activities to the Subject panels, Main panels and referees, ranging from
the support during the remote reviews and assessments and the Subject panel
meetings (the panel secretariats) to workflow management, progress monitoring,
and reporting to the Evaluation Management Board (management reports)
Performing random audits of the submissions by RUs - A first type of audit
concerns a request for proof regarding, for example, the number of researchers,
PhD graduates, and strategic partnerships and the volume of grants and contract
research. In the interest of proportionality, the first type of audits is done for
around 5% of EvU submissions. A second type of audit is the confrontation of
submitted information with information in databases about, for example, staff and
revenues of research organisations, dissertations, grants and service contracts. In
the interest of proportionality, the second type of audits is done for around 10% of
EvU submissions. They will also perform targeted audits in case of concerns raised
by Main or Subject panel members and Referees. As much as possible, random
audits are spread across different research organisations.
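The two audit tiers described above can be sketched as a sampling routine (illustrative Python; the EvU identifiers are hypothetical). Type-1 proof-request audits draw roughly 5% of submissions and type-2 database cross-checks roughly 10%, each drawn without replacement:

```python
import random

def draw_audit_samples(submissions, seed=None):
    """Select ~5% of EvU submissions for proof-request audits (type 1)
    and ~10% for database cross-check audits (type 2)."""
    rng = random.Random(seed)
    n = len(submissions)
    type1 = rng.sample(submissions, max(1, round(0.05 * n)))
    # Type-2 is sampled independently; an EvU may be drawn for both.
    type2 = rng.sample(submissions, max(1, round(0.10 * n)))
    return type1, type2

submissions = [f"EvU-{i:03d}" for i in range(1, 201)]  # 200 hypothetical EvUs
t1, t2 = draw_audit_samples(submissions, seed=42)
print(len(t1), len(t2))  # 10 20
```

Targeted audits triggered by panel concerns would be added on top of these random draws.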
Main tasks for the final steps in the evaluation phase are:
The coordination of the finalisation of the Panel Reports at the Research Unit,
Field and Disciplinary Area level, including the final meetings of the Subject panel
chairs with the Main panel chairs
The drafting of the conclusive analytical reports at the Evaluated Unit level
The publication of the evaluation results and the transfer of structured
information to the R&D governance bodies, including information for funding
purposes
Main tasks in the finalisation phase can be divided into short-term and mid-term
activities. Main follow-up tasks in the shorter term include:
First, this concerns the reporting on the evaluation management
process, for the sake of transparency but also as part of a broad and shared
learning process. In contrast to the other (internal) management reports and
minutes of the management board and team meetings, the final management
report should be made public. The main purpose of this final
management report is an open communication on the process for the evaluation
implementation and the criteria used for decision making; the challenges
encountered and solutions found; the lessons learned for future evaluation
exercises; and the details of the direct costs – and if possible an estimate of the
indirect costs.
Another main task is the delivery of analytical support to the R&D governance
bodies. As is described in the Final report 2 - The Institutional Funding Principles,
the evaluation results will constitute important information for the R&D
governance bodies in their management of the Research Organisations of their
competence - and more specifically the performance agreements. The evaluation
¹ The structure of the self-assessment template is defined such that there is no need for statistical data
analysis or additional drafting of ‘comprehensive data reports’ as was done in the Small Pilot Evaluation
(see Background report 10: The Small Pilot Evaluation – feedback and results)
results will constitute strategic information also for the broader R&D policy-making
activities of the R&D governance bodies, such as the design of policy interventions
or programmes
Mid-term follow-up activities are centred on the monitoring of the effects of the
evaluation and funding system. This should have both a backward and forward-
looking function, i.e. assessing whether and to what extent the objectives have been
reached and identifying the elements that should be modified in order better to reach
the objectives or to avoid the unintended negative effects. They will include data
collection and analysis activities and in the last steps, a proper stakeholder
consultation process in support of the decision-making for a possibly revised
evaluation methodology.
We describe the tasks of the evaluation management team in detail in the Evaluation
Handbook (Background report 5).
2.1.3 Timeline of the evaluation process
Exhibit 3 maps out an indicative timeline for the main milestones in the evaluation
process. This Gantt chart accounts for a full-scale evaluation and takes into account the needed
lapse times for the involvement of the panel experts and the submissions by the
participating Research Units, as well as the time needed for the evaluation
management team to handle the different activities, given the workload involved and
especially, the number of experts to be nominated.
We have estimated that the entire evaluation process, i.e. from the preparation of the
evaluation to the publication of the reports, will need three years. Roughly, each of the
three steps, i.e. the preparation, evaluation and finalisation, will take a year each.
While three years may seem an exaggeratedly long time, it is within the norm of
international practice. A more complex evaluation system for a bigger country like the
UK, for example, takes approximately five years (the preparatory phase for the
RAE2008 in the UK started in 2004, with the results published in 2009).²
Nevertheless, one should consider that the time needed for the evaluation process is
determined by a number of key factors, i.e.
The need for revision of the evaluation methodology and its tools, determining
the time needed for the preparation phase
The volume of the work, i.e. the number of Evaluated Units and Research Units,
which influences the workload for the panels
The need for a lapse time between the time of nomination of the panel experts and
their active involvement – the golden rule is at least 3 months
The need for a lapse time between the launch of the evaluation (with the
publication of the evaluation protocol) and the submission of the Research Units’
research outputs and self-assessments. The Research Units need to be given the
needed time to collect their data, select the research outputs to submit and run
their internal self-assessment process. Three months need to be considered as an
absolute minimum
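The two three-month lapse rules above can be sketched with standard-library dates (illustrative Python; the launch and nomination dates are hypothetical):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day where needed."""
    m = d.month - 1 + months
    year, month = d.year + m // 12, m % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

launch = date(2016, 1, 15)      # hypothetical launch of the evaluation
nomination = date(2016, 1, 15)  # hypothetical expert nomination date

earliest_submission = add_months(launch, 3)      # absolute minimum lapse
earliest_panel_work = add_months(nomination, 3)  # the "golden rule"
print(earliest_submission, earliest_panel_work)  # 2016-04-15 2016-04-15
```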
A more detailed indicative timeline is shown and described in the Evaluation
Handbook (Background report 5).
² Source: RAE2008, RAE Manager’s Report, April 2009
Exhibit 3 Indicative time-line of the evaluation process
[Gantt chart spanning three years in quarterly columns (months 1-3 through 10-12 of each year); Year 1 is the preparatory phase, Years 2 and 3 the evaluation phase. Milestones, in sequence: set-up; launch of the evaluation; registration of the EvUs – indicating the RUs; kick-off meetings of Main panels; submission of the research outputs for review; submission of the self-assessment forms; kick-off meeting of the Subject panels; remote review & assessments; Subject panel evaluation meetings; RU Panel reports – finalisation & approval; analytical reports at EvU, Field & Disciplinary Area level; publication of results on the website; finalisation.]
2.2 Roles and tasks of the Evaluated Units (EvUs)
A core objective of the evaluation is the improvement of research and research
management. To meet this objective, the evaluation system entails a self-assessment
and an expert panel review. Input for the expert panel evaluation is, therefore, not
only a set of data and information submitted by the participating Research
Organisations, but also and foremost the participants’ self-reflections on their
performance. The final outcome of the evaluation system will be an assessment of past
performance but will also have a forward-looking component, as a result of both the
self-assessment and the panel evaluation.
2.2.1 Tasks in the evaluation process
Specific tasks of the participating Research Organisations in the evaluation process
are:
To identify and register the researchers forming the Research Unit(s)
To coordinate the collection of information in order to justify recommendations
for cross-referrals and/or the application for the registration of an
Interdisciplinary Research Unit, if appropriate
To set up the criteria and processes for the selection of the Research Units’ most
outstanding research outputs for review
To coordinate the collection of the required data and information, controlling and
guaranteeing their completeness and correctness
To set up an internal self-assessment system, focusing on the identification of the
Research Units’ competitive positioning in the national and international
environment
To conclude the self-assessment process with a SWOT analysis, based on the data
and information collected for the panels combined with the outcomes of the self-
assessment, and define areas and actions for improvement
For this purpose, the Research Organisations and their Evaluated Units (EvU) will
need to set up a structure, i.e. ‘committees’, at the level of the EvU and for each
Research Unit.
These committees should include the relevant representatives of the research
organisation’s management as well as lead researchers in the different fields (Research
Units). Their task is to coordinate the evaluation process within the institution,
including the management of the self-assessment. Ideally, the latter will include all
researchers forming the specific RU(s).
2.2.2 Data and information required
Quantitative data are requested in the form of time series, i.e. covering the 4 or 5 years
(depending on the evaluation frequency) prior to the evaluation year (the
‘evaluated period’). The cut-off date is December 31 of the year preceding the evaluation year.
Quantitative data are requested for the following:
Research and support staff and the researchers’ profile, at Evaluated Unit and
Research Unit(s) level
PhD students - enrolled, awarded, ‘recruited’ or trained by researchers in the
Research Unit (only for Research Units teaching or training PhD students)
Total institutional funding, at Evaluated Unit and (estimates) at Research Unit(s)
level
External competitive and contract research funding, at Research Unit(s) level
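The reporting window described above follows a simple rule, sketched here in illustrative Python: for evaluation year Y and frequency f, the evaluated period runs from Y − f to Y − 1, with the cut-off on December 31 of Y − 1.

```python
from datetime import date

def evaluated_period(evaluation_year: int, frequency: int = 5):
    """Return (first_year, last_year, cut_off) for the time series
    the Evaluated Units must report."""
    last = evaluation_year - 1
    first = evaluation_year - frequency
    return first, last, date(last, 12, 31)

first, last, cut_off = evaluated_period(2016, frequency=5)
print(first, last, cut_off)  # 2011 2015 2015-12-31
```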
Qualitative information is requested for the following topics:
Background information on the Evaluated Unit and the Research Unit(s)
(organisational structure, scientific focus, history)
Human resources management, for all researchers and for PhD students and
postdoctoral fellows (the latter only for universities and university hospitals), at
the Evaluated Unit and the Research Unit(s) level
Research infrastructure, available and used by the Research Unit
Research strategy and plans for the Research Unit
National and international collaborations and esteem measures
Knowledge and technology transfer activities to non-academic actors in society
For this qualitative information, word limits are given. These should also be read
as an indication of the required level of detail in the description.
Results of the self-assessment should be reported regarding
The adequacy of the research infrastructure in the Evaluated Unit for research in
the field
Key value and relevance of the RU activities for research and development
The RU competitive positioning in the national and international context
Societal value of the RU activities and research
The final SWOT analysis and conclusions
The input for assessment required from the participating Evaluated Units as well as
the process for the submission of the information is described in detail in the
Evaluation Handbook (Background report 5).
2.3 Roles and tasks of the evaluation panels
The role of the main panel is to moderate. It has an auditing function and provides a
bridge between the Evaluation Management Team and the panels.
The main panel chairs (international experts) are key in the evaluation process in their
decision-making role in any sensitive matters. They guide and support the Subject
panel Chairs and, most importantly, ensure consistency in the Subject panels’ approach
to evaluation, thus guaranteeing a fair evaluation for all. They are also in charge of the
conclusive analytical report at the Disciplinary Area level.
The main panel members (national experts) constitute an additional ‘auditing’
element to the panel review, advise on matters relating to insufficient or unclear
information in the RUs’ submissions, and support the Main panel Chair, amongst
others by providing information on the national context whenever required.
The subject panels have the primary function of conducting the performance
assessment.
Subject panel chairs will report to the main panels on the progress and on how the
working methods are implemented, guarantee the quality of the evaluation process
and its outcomes, and be in charge of drafting a conclusive analytical report on the
state of research in their field(s) of discipline.
The referees will have the exclusive role of assessing the excellence of a set of
submitted research outputs. There will be two Referees for each submitted research
output: a First and a Second reader. The First reader has the core responsibility for the
Review report and acts as ‘lead’ referee. They will work remotely to keep costs down.
Detailed evaluation cost framework
R&D Evaluation Methodology and Funding Principles
We describe the tasks and role of the different panels and referees in detail in the
Evaluation Handbook (Background report 5)
2.4 The role of the RD&I Information System (IS VaVaI)
The Small Pilot Evaluation (SPE) enabled us to test to what extent the RD&I
Information System was a useful and credible source of information and could provide
support to the participating EvU for the collection of data as well as support the
evaluation management in the collection and processing of the data on research
outputs. There were some teething problems in the whole SPE process (see the report
The Small Pilot Evaluation: Feedback and Results – Background report 10), but the
high value of using the RD&I Information System as much as possible in the
evaluation process was apparent to most of the actors involved.
The value of the RD&I Information System lies in the first instance in the richness
of the information it holds on research outputs, as well as the data on funding and
researchers, etc. Its capacity to interlink and integrate these data, internally and
with external datasets such as Web of Science and Scopus, is also highly relevant
in the context of evaluation.
The RD&I Information System proved its usefulness in reducing the burden on the Research
Organisations during the Small Pilot Evaluation. Valuable support can be delivered
through
The transfer of the lists of researchers, registered in the system against the
Evaluated Units, from which to choose the ones ‘belonging’ to the Research Units
The use of the unique identifiers of these researchers in the system to extract and
transfer to the Evaluated Units the lists of research outputs in the system and their
field allocation, from which to choose the ones to submit for review
The same list forms also the basis for the analysis of the Research Units’
publication profile, complementing the information from the international
bibliometric datasets
The analysis of the research outputs registered in the system at the level of fields
and disciplinary areas, again complementing the information from the
international bibliometric datasets
The delivery of the lists of all publications by the Research Units to the evaluation
panels, complete with links for access to the abstracts stored in the system
In order to fully support the evaluation process and exploit the IS VaVaI to its best
potential, the system needs to be optimised. We describe the potential extensions to
the system in the report The RD&I Information System as an information tool for
evaluation (Background report 9).
3. Detailed cost framework
We estimated the costs of a full-scale evaluation using an activity-based cost approach.
We took into consideration all costs, both the ‘direct’ ones for the organisation of the
evaluation, the handling of the process, and the evaluation panels, and the indirect
ones, i.e. the costs related to the time that the evaluated organisations will need to
invest in their self-assessments.
Our estimate is based on our experience, the outcomes of the Small Pilot Evaluation,
and international experience.
Information on the full costs of national evaluations is hard to find: only a few
countries look into the indirect costs of these exercises. Through literature review and
interviews, we collected sufficiently useful data on the UK RAE 2008 and the Italian
VQR (Exhibit 4).
Exhibit 4 Costs of PRFS in other countries

                                      UK RAE 2008   IT VQR     NZ QE 2006
Direct costs (staff & panels) (k€)    € 15,120      € 10,570   n/a
Indirect costs (k€)                   € 74,340      € 54,029   n/a
Total direct & indirect costs (k€)    € 89,460      € 64,600   € 46,960
Indirect costs % total costs          83%           84%        n/a
Nr FTE researchers                    68,563        61,822     8,671
Total costs per researcher            € 1,305       € 1,045    € 5,416

Source: Technopolis calculation on multiple sources3
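The derived rows of Exhibit 4 (indirect-cost share and total cost per researcher) follow directly from the raw figures; a minimal sketch of that calculation (the figures come from the exhibit, the dictionary layout is ours):

```python
# Reproduce the derived rows of Exhibit 4 from the raw figures (costs in k EUR).
systems = {
    "UK RAE 2008": {"indirect": 74_340, "total": 89_460, "fte": 68_563},
    "IT VQR":      {"indirect": 54_029, "total": 64_600, "fte": 61_822},
    "NZ QE 2006":  {"indirect": None,   "total": 46_960, "fte": 8_671},
}

for s in systems.values():
    # Indirect costs as a percentage of total costs (only where known)
    if s["indirect"] is not None:
        s["indirect_pct"] = round(100 * s["indirect"] / s["total"])
    # Total cost per FTE researcher, converted from k EUR to EUR
    s["per_researcher"] = round(1_000 * s["total"] / s["fte"])
```

This reproduces the 83% and 84% indirect-cost shares and the per-researcher figures of €1,305, €1,045 and €5,416.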
While the evaluations in Italy and New Zealand are based on the UK system, which
acts as the model for PRFS worldwide, the three evaluation systems show some
significant differences that influence their overall costs4:
The UK RAE/REF is a mature system that has gone through many rounds of
improvement and enhancement (i.e. after each evaluation round) in response to
comments from the research community and policy needs. It has become
ever more complex – and costly, some say outrageously so.5 It is an expert
panel-based system and covers universities only, which select the best of their
researchers for the competition. It runs every 6 years, covering a five-year period
Italy has run two national peer review-based evaluations so far, the VTR in 2006
and the VQR 2004-2010 in 2013. Both are inspired by the UK system, but the
VQR has a strong element of bibliometrics (for cost-saving reasons) while
3 Main sources of information were: On the RAE: RAE 2008 Manager Report (2009); RAE (2008)
Accountability Review, PA Consulting, 2009; On the VQR: Geuna, A., Piolatto. M., The development of
research assessment in the UK and Italy: costly and difficult, but probably worth (for a while),
Department of Economics and Statistics “Cognetti de Martiis”, Universita’ degli Studi di Torino, Italy: 2014
4 We compare the features of PRFS internationally in the Final report 1 – The R&D Evaluation
Methodology and cover them in detail in the Background report 1 - Evaluation systems in international
practice
5 Martin, B., The Research Excellence Framework and the ‘impact agenda’: are we creating a
Frankenstein monster?, Research Evaluation, 20(3), September 2011, pages 247–254
bibliometrics is only marginal in the UK system (even in the latest REF) due to
opposition in the research community. All universities and some interuniversity
research institutes are covered and all researchers participate, submitting a
minimum number of publications (6 per researcher). The assessment is done
predominantly through bibliometrics for the ‘hard’ sciences and peer review for
the others. It is meant to run every 3 years
We included New Zealand in Exhibit 4, above, as it is the only evaluation system
that we know of where the panel-based evaluation also includes onsite visits. It
covers all universities and all researchers. As can be seen, this leads to very high
costs. The evaluation runs every 6 years
In the next section (Section 3.1) we show the results of our calculations, i.e. the cost
estimate for a full-scale implementation of the Evaluation Methodology (EM).
We then break these costs down into
Estimates of the costs for the evaluation panels (Section 3.2)
Estimates of the costs for the evaluation management (Section 3.3), including the
evaluation management team, the bibliometric data analyses, and the IT services
and use/extensions of the RD&I Information System
Estimates of the indirect costs (Section 3.4)
3.1 Cost estimate for a full-scale evaluation based on the Evaluation
Methodology
As mentioned in the introduction to this report, the target maximum budget for the
EM was set at “1.0 percent of public institutional funding for R&D for a five-year
period”, estimated at CZK 677 million. This estimate refers to the total of the national
R&D institutional funding expenditure budget, of which Institutional funding for
Research Organisations is one of the budget lines.
Considering both the total institutional funding for R&D budget and the ‘specific’ one,
Exhibit 5, below, shows that the maximum cost corresponds to a cost per researcher of
€1,336 and €876, respectively.
In other words, the 1% limit calculated against the total institutional funding for R&D
budget would imply that the evaluation system is more expensive than the UK and the
Italian ones, while the one based on the ‘specific’ budget line indicates more cost
efficiency than the UK RAE but only slightly so compared to the IT VQR (Exhibit 4,
above).
Exhibit 5 Maximum total costs of the new evaluation system

                                                R&D Institutional funding   Institutional funding for
                                                expenditure – total         conceptual development of ROs*
2012 budget (m CZK)                             13,536 Kč                   8,874 Kč
Total for 5 years based on 2012 budget (m CZK)  67,680 Kč                   44,370 Kč
1% limit (m CZK)                                676.8 Kč                    443.7 Kč
1% limit (k€)                                   € 24,611                    € 16,135
Nr FTE researchers                              18,422                      18,422
Maximum total costs per researcher              € 1,336                     € 876

Notes: *includes research intentions
Taking into consideration the comparison with the cost rates in the other countries, we
designed the evaluation methodology counting on a cost limit of 1% of the institutional
funding for the conceptual development of ROs budget line, i.e. k€ 16,135.
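The 1% ceiling in Exhibit 5 is straightforward arithmetic on the 2012 budget lines; a sketch, with the ~27.5 CZK/EUR rate backed out of the exhibit itself rather than taken from an external source:

```python
# 1% limit over a 5-year period, computed from the 2012 budget lines (Exhibit 5).
def one_percent_limit_m_czk(annual_budget_m_czk: float, years: int = 5) -> float:
    return annual_budget_m_czk * years * 0.01

total_line = one_percent_limit_m_czk(13_536)     # 676.8 m CZK
specific_line = one_percent_limit_m_czk(8_874)   # 443.7 m CZK

# Implied exchange rate: 443.7 m CZK corresponds to k EUR 16,135 in the exhibit
czk_per_eur = 443.7e6 / 16_135e3                 # ~27.5
max_per_researcher = round(specific_line * 1e6 / czk_per_eur / 18_422)  # ~876 EUR
```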
This sum was, from our perspective, the absolute maximum possible. A key principle
for our design of the Evaluation Methodology was that the cost and burden of the
evaluation should be the minimum needed to deliver a robust and defensible
evaluation process. The sum of k€ 16,135, therefore, did not constitute a target budget,
but a limit.
In this context we took into consideration that, as in any other country, the evaluation
system is bound to become more complex and sophisticated in response to the needs
and comments from the research community. This is especially the case when
evaluation results are linked to institutional funding, as the experiences in, e.g., the
UK with the RAE and the Czech Republic with the Metodika show. Our approach was
therefore to design an evaluation system that would set the basis for these future
enhancements, keeping it as simple as possible and ensuring methodological
robustness while adequately responding to the needs of the Czech RD&I community
and policy makers.
We have estimated the total costs of a full-scale evaluation at 12,929 k€ (Exhibit 6),
i.e. approximately 3,000 k€ under the limit.
This implies a total cost per researcher of €702; the indirect costs that will need to be
carried by the RD&I community represent 48% of the total. On both parameters, the
EM therefore constitutes a cost-efficient evaluation system by international standards.
The cost estimates are based on an evaluation running every six years, covering a five-
year period.
Exhibit 6 Cost estimate for a full-scale evaluation
Total – in k€
Total direct costs € 6,672
Evaluation panels € 3,905
Evaluation management € 2,767
Total indirect costs € 6,257
Estimated overall costs for a full-scale evaluation € 12,929
Indirect costs % of total costs 48%
Nr FTE researchers 18,422
Total costs per researcher € 702
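The totals in Exhibit 6 can be cross-checked in a few lines (a consistency check only, not part of the methodology itself):

```python
# Consistency check for Exhibit 6 (all cost figures in k EUR).
panels, management, indirect = 3_905, 2_767, 6_257
fte_researchers = 18_422
limit_k_eur = 16_135

direct = panels + management                             # 6,672
total = direct + indirect                                # 12,929
indirect_pct = round(100 * indirect / total)             # 48 (%)
per_researcher = round(1_000 * total / fte_researchers)  # 702 (EUR)
headroom = limit_k_eur - total                           # ~3,200 k EUR under the limit
```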
The costs for the evaluation panels are based on the set-up of 26 Subject panels and a
maximum of 20 days’ involvement for each expert, of which 5 days in the Czech
Republic. The Evaluation management costs include 5 years’ salaries of the core staff in
the evaluation management team, given their ongoing involvement in the evaluation
cycle; 3 years’ salaries for the evaluation management board members (approximately
one FTE/year in total); additional staff for 1 year (24 FTE/year); costs for external
bibliometric analysis support; and the costs for the RD&I Information System use and
updates (cumulative essential and recommended updates). The indirect costs envisage
the involvement of 10 researchers and 1 administrative staff member per Research Unit.
We detail this further in the next sections.
3.2 Detailed estimates of the costs for the evaluation panels (direct costs)
The use of evaluation panels is costly, especially when international experts are
involved. The requirement for a 100% international profile of the evaluating
experts therefore constitutes a high risk factor for the cost efficiency of the
evaluation exercise.
During the design of the evaluation methodology we focused on developing a
robust evaluation system while aiming for the highest possible cost efficiency.
One of the key factors that influence the achievement of this objective is the capacity of
the evaluation management team to identify and contract the international evaluation
experts that have the required profile.
The ‘review fatigue’ in the international research community, with many researchers
involved in peer review appraisals of project proposals and various evaluation
exercises requiring external experts, implies that the identification and nomination of
evaluation experts is not an obvious task. The important role that these experts play in
the evaluation requires, therefore, attention for their needs and expectations.
This regards in the first instance their limited availability. Experience tells us that
the time the experts are requested to spend in the country is especially critical. The
rule of thumb is that international experts should spend only 5 days in the country
at a time, for preferably one and at most two meetings. A realistic maximum
expected time investment is around 20 mandays in total, especially for the
Chairs.
In order to reach these parameters, we have
Set a strong focus on the remote preparatory work for the final assessment (i.e.
remote assessment and remote review), which will need to be as complete as
possible and of the needed quality level to allow for short meetings
Foreseen as many Subject panels as the budget allowed, in order to distribute
the workload (RU assessment) over as many Subject panel members as
possible
Spread the tasks over Subject panel members, Subject panel chairs, and Main
panel Chairs
Assigned wherever possible preparatory work and report drafting work to the
Evaluation Management Team
Ruled out onsite visits
Second, a day-rate that reflects the level of expertise required from the experts needs
to be foreseen.
We took all of these factors into account in the design of the evaluation methodology
and the drafting of the cost estimate set out below.
We took the approach of Activity Based Cost modelling for the cost estimates. Our
calculations are, therefore, centred on the total number of days needed to implement
specific activities and tasks.
Elements that influence the costs of an evaluation panel and set the ‘baseline’ for the
calculation of the costs are
The volume of the workload that can be expected, and closely related to this:
The number of experts required
The number of meetings and days needed for the remote assessment/review
3.2.1 Volume of the workload
The volume of the workload depends on the number of eligible Research Units (RU)
and Evaluated Units (EvU) and for the latter, how many EvU cover more than 1 RU.
An estimate of the number of submitted outputs is also needed to identify the peer
review workload. As we defined the number of research outputs that can be submitted
in terms of a range, the number of submitted outputs for review can only be an
estimate.
Based on the analysis of data on research outputs in the RD&I Information System
related to the period 2009-20136, we can establish the base for the cost calculations in
terms of volume as follows:
Exhibit 7 Estimated volume of the workload
Number
Total nr eligible EvU (above minimum threshold) 300
Total nr eligible RU (above minimum threshold) 780
Total nr EvU with more than 1 RU 170
Estimate total submitted research outputs min 2889
Estimate total submitted research outputs max 3467
Average estimate submitted outputs 3178
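The last row of Exhibit 7 is simply the midpoint of the estimated range:

```python
# Average estimate of submitted research outputs (midpoint of the range in Exhibit 7).
min_outputs, max_outputs = 2_889, 3_467
average_outputs = (min_outputs + max_outputs) // 2  # 3,178
```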
3.2.2 The number of panel experts required
A second item that constitutes the baseline is the number of panel experts involved.
We count on the establishment of 6 Main panels and 26 Subject panels, covering the
36 fields – as advised by our international experts. On average, there are therefore 4
Subject panels per Main panel.
Each Main panel will have 4 experts: 1 international (the Chair) and 3 national (the
members); each Subject panel will have on average 7 experts (1 Chair and 6 members).
As shown in Exhibit 8, a full-scale evaluation can be expected to involve 206 experts.
We make a distinction between panel chairs and members as well as between
international and national experts, envisaging different day rates. Higher day rates
need to be foreseen for panel chairs because of the higher level of expertise that is
required from them.
Exhibit 8 Number of experts involved
Number
Total nr main panel chairs (foreign) 6
Total nr main panel members (local) (3 per Main panel) 18
Total nr subject panel experts (foreign) 182
Total nr Subject panel chairs 26
Total nr Subject panel members (average of 6 per panel) 156
Total number of panel experts 206
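The head count in Exhibit 8 follows from the panel structure described above; as a sketch (the variable names are ours):

```python
# Head-count arithmetic behind Exhibit 8.
main_panels, subject_panels = 6, 26

main_chairs = main_panels               # 1 international Chair per Main panel
main_members = 3 * main_panels          # 3 national members per Main panel
subject_chairs = subject_panels         # 1 Chair per Subject panel
subject_members = 6 * subject_panels    # on average 6 members per Subject panel

total_experts = main_chairs + main_members + subject_chairs + subject_members  # 206
```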
Next to the panel members, we have foreseen also the involvement of Specialist
Advisors. These experts will have a limited involvement in the process: they will have
6 See the report Typology of the Research Organisations and the Effects of the EM Thresholds (Background
report 2)
an advisory function only and will be called in by the Subject panels in order to
complement their expertise in a specific field or interdisciplinary research (the
international advisors) or their knowledge of the research in the national context (the
local advisors). We counted on a maximum of 2 international Specialist Advisors and 2
national ones per panel.
Number
Nr foreign specialist advisors (2 per panel) 52
Nr local specialist advisor (2 per panel) 52
Total nr specialist advisors (4 per panel) 104
The number of referees cannot be estimated because that depends on the number of
sub-fields for which the RUs will submit outputs. We estimated the total number of
days required instead (see the next section).
3.2.3 The number of meetings and days needed for remote assessment/review
We defined the number of meetings for the different panels and the average workload
taken up per manday (which influences the number of days needed per panel) as
shown in Exhibit 9. This is based on our expertise and international practice, and
considers the process for the evaluation and the tasks that the experts need to cover.
We considered that in one day of remote assessment, the experts will be able to assess
on average 2 Research Units. This takes account of the fact that the assessment of large
Research Units may be more complex than the assessment of small ones. Data related
to the period 2009-2013 suggest that the number of ‘large’ Research Units (more than
500 research outputs in 5 years) will account for approximately 20% of the registered
Research Units.
The estimate of decisions taken on 6 Research Units (on average) during the final
Subject panel meeting is based on the expectation that the experts responsible for the
remote assessment of the RUs (2 per RU) will have developed the draft RU report
prior to the meeting. As a result, the panel will need to have in-depth discussions only
for those cases where the two experts had different opinions.
We estimated that the referees will assess on average 10 submitted research outputs a
day. This estimate appears conservative when set in the international context.
In the Italian VQR 2014, the reviewers were expected to do 15 to 20 reads a day; in the
Swedish FOKUS (to be implemented in 2016), the research council budget counts on
15 reads a day (which is similar to the estimates by the Academy of Sciences for the
2015 evaluation).
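With two referees per output and ten remote reviews per referee-day, the referee workload implied by the average submission estimate can be sketched as:

```python
# Referee workload implied by the average submission estimate.
import math

outputs = 3_178            # average estimate of submitted research outputs
readers_per_output = 2     # a First and a Second reader per output
reviews_per_day = 10       # remote reviews per referee manday, incl. reporting

total_reviews = outputs * readers_per_output                  # 6,356
referee_mandays = math.ceil(total_reviews / reviews_per_day)  # 636
```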
Exhibit 9 Number of panel meetings and number of days needed per panel
Number
Main panels
Total nr Main panel meetings - per panel 3
Days per Main panel meeting 2
Subject panels
Remote assessment: nr RU assessed per remote
assessment day (includes reporting) 2
Nr RU assessed per panel meeting day 6
Mandays for First panel meeting - per panel 3
Mandays for assessment meeting - per panel 5
Total mandays for Subject panel meetings - per panel 8
Total nr Subject panel meetings - per panel 2
Referees
Remote reviews per manday referee (includes reporting) 10
3.2.4 Total number of mandays needed
Based on this baseline per panel and taking into consideration the volume of the
workload and the number of experts involved, we come to the total number of
mandays needed for the experts’ involvement.
The different panels and experts in those panels have different tasks, so we calculate
the total mandays for each of them separately (Exhibit 10).
Exhibit 10 Total number of mandays needed per type of expert
Main panel chairs
Number of days
Days for main panel meetings per Chair 6
Total mandays for Main panel meetings 36
Days for meetings with subject panels (per subject panel) 2
Coordination of subject panels (per subject panel) 1
Total mandays for coordination of subject panels (1d per subject p) 78
Days for assessment of IRU - per panel Chair 1
Total mandays for assessment of IRU 6
Analytical report: mandays per Area report 1
Total Area reporting mandays 6
Total mandays tasks for Main panel chairs 126
Main panel members
Number of days
Average days per main panel meeting 2
Total nr main panel meetings - per panel 2
Days for main panel meetings per member 4
Total days for main panel meetings 24
Days for support to main panel Chair - per member 4
Total mandays support to Main panel chair 72
Days for support to Subject panels - per member 4
Total mandays support to Subject panels 113
Total mandays tasks for Main panel members 213
Subject panel chairs
Number of days
Total nr Subject panel meetings - per panel 2
Total mandays for Launch meeting (3 days per meeting) 78
Total nr subject panel meeting days for assessment - per panel 5
Total mandays for assessment meetings 130
Total nr meetings with Main panel chair 2
Mandays per meeting 1
Mandays for meetings with Main panel Chair 52
Analytical report: EvU reports per manday 2
Total EvU reporting mandays 85
Total number fields 36
Analytical report: Mandays per Field report / margin 4
Total Field reporting mandays 144
Total mandays tasks for Subject panel Chairs 489
Subject panel members
Number of days
Total mandays for First panel meetings (3 days per meeting) 468
Remote assessment: nr RU assessed per remote assessment day 2
Nr RU for remote assessment (2 panel members per RU) 1560
Total mandays for remote assessment 780
Mandays for assessment meeting - per panel 5
Total mandays for assessment meetings 780
Support for Field reports: mandays per panel member 1
Total mandays support for Field reports 156
Support for Cross-referrals / IRU: mandays per panel member /
margin 4
Total mandays support for cross-referrals/IRU 624
Total mandays tasks for Subject panel members 2808
Referees
Number of days
Estimate submitted outputs 3178
Referees per output 2
Total outputs for review 6356
Remote reviews per manday referee (includes reporting) 10
Total remote review mandays for Referees 636
Specialist Advisors
Number of days
Average mandays involvement of Specialist advisors per advisor 2
Total days involvement of Foreign specialist advisors 104
Total days involvement of Local specialist advisors 104
Total mandays tasks for Specialist Advisors 208
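As a spot-check, the Subject panel members' manday total in Exhibit 10 can be rebuilt from the baseline parameters of Sections 3.2.1-3.2.3 (the variable names are ours):

```python
# Rebuild the Subject panel members' manday total (Exhibit 10) from the baseline.
members = 26 * 6            # 156 Subject panel members (6 per panel on average)
eligible_rus = 780

first_meetings = 3 * members                 # 468: 3-day launch meeting each
remote_assessment = eligible_rus * 2 // 2    # 780: 2 readers per RU, 2 RUs per manday
assessment_meetings = 5 * members            # 780: 5-day assessment meeting each
field_report_support = 1 * members           # 156
cross_referral_support = 4 * members         # 624: cross-referrals / IRU margin

total_member_mandays = (first_meetings + remote_assessment + assessment_meetings
                        + field_report_support + cross_referral_support)  # 2,808
```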
Exhibit 11 shows the breakdown and the number of days of involvement per expert.
Exhibit 11 Time investment required per expert

                                            Number   Mandays per expert
Main panel chairs (foreign)                 6        21
Main panel members (local)                  18       12
Subject panel chairs                        26       19
Subject panel members                       156      18
Foreign specialist advisors (2 per panel)   52       2*
Local specialist advisor (2 per panel)      52       2*
Total nr experts involved                   310

*On average
3.2.5 Day rate per type of expert
The next step is the translation of the number of mandays into costs, based on a day
rate per type of expert. In the preceding section we mentioned the ‘review fatigue’
in the international research community and its implications for the difficulty of
identifying and nominating international experts. Apart from the time investment
required, the envisaged day rates also play an (obvious) role, in particular in relation
to the required competence profiles of the experts.
In 2014, appropriate day rates for international experts range between a maximum of
€1000 and a minimum of €500.
We are well aware that this is considerably higher than the EC standard rate of €450 a
day. However, one should consider that these rates are for the delivery of specialist
support to the Commission. There are multiple reasons why people are willing to take
up the task even at such a low rate (by Western European standards), the most
important one being the prestige of supporting the Commission and the importance
for networking and future contracts.
In the context of this evaluation, we consider a day rate of €450 to be unrealistic.
Especially when one wishes to have a geographical spread of the involved experts
(which should, indeed, be an objective) and when setting high requirements for the
competence profile of the experts in order to guarantee a quality evaluation, the
budget should count on paying out higher rates.
Exhibit 12 shows the range of day rates that we defined for the different experts.
Exhibit 12 Day rate per type of expert
Day rate
Foreign experts
Main panel & panel chair € 1,000
Subject panel members € 800
Specialist advisors € 800
Referees € 500
Local experts
Main panel members (local) € 500
Specialist advisor € 500
3.2.6 Total costs for the panel experts’ work
Based on these day rates and the total number of mandays needed, the total costs for
the panel experts’ work is shown in Exhibit 13.
Exhibit 13 Total manday costs per type of expert and overall
Total manday
costs
For main panel chairs € 126,000
For main panel members € 106,333
For subject panel chairs € 489,000
For subject panel members € 2,246,400
For referees € 317,800
For foreign specialist advisors € 83,200
For local specialist advisors € 52,000
Grand total manday costs for the panel experts € 3,420,733
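Each line of Exhibit 13 is the corresponding manday total from Exhibit 10 multiplied by the day rate from Exhibit 12. A sketch for the lines that rest on whole-number manday totals (the Main panel members' and referees' lines rest on fractional totals and are not reproduced here):

```python
# Manday costs = total mandays x day rate (Exhibits 10 and 12), in EUR.
manday_costs = {
    "main_panel_chairs":     126 * 1_000,    # 126,000
    "subject_panel_chairs":  489 * 1_000,    # 489,000
    "subject_panel_members": 2_808 * 800,    # 2,246,400
    "foreign_advisors":      104 * 800,      # 83,200
    "local_advisors":        104 * 500,      # 52,000
}
```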
3.2.7 Travel, hotel and subsistence costs
The involvement of international experts and their meetings in the Czech Republic
implies that travel, hotel and subsistence costs must also be counted in. We calculated
these only for the foreign experts. We established the standard costs as shown in
Exhibit 14.
Exhibit 14 Standard travel and hotel & subsistence costs
In €
Travel expenses (per trip) € 300
Hotel & subsistence (per day) € 150
The level of these costs obviously depends on the number of trips needed for the
meetings and days per meeting. Again, this depends on the type of expert in line with
his/her tasks (Exhibit 15).
Exhibit 15 Trips and hotel days per type of panel expert
Meetings main panel chairs
Total nr trips for main panel meetings 18
Trip for meeting with subject panels 12
Total nr trips 30
Hotel days for Main panel meetings 36
Hotel days for meeting with subject panels 12
Total hotel days (panel mtgs & coordination subject
panels) 48
Meetings subject panel chairs
Total nr trips - all panel meetings 52
Total nr trips for meetings with Main panel chair 52
Total nr trips - all meetings 104
Hotel days for Launch meeting (3 days per meeting) 78
Hotel days for assessment meetings 780
Hotel days for meetings with Main panel chair 52
Total hotel days 832
Meetings subject panels
Total nr trips - all panel meetings 312
Total hotel days for First panel meetings (3 days per
meeting) 468
Total hotel days for assessment meetings 780
Total hotel days 1248
Specialist advisors (foreign)
Total nr trips (1 trip per advisor) 52
Hotel days 104
Multiplied by the standard costs for trips and hotel & subsistence, we arrive at a total
of €484,200 for travel and hotel & subsistence (Exhibit 16).
Exhibit 16 Total travel and hotel & subsistence costs
Total cost
Main panel chairs
Travel costs € 9,000
Hotel & subsistence € 7,200
Subject panel chairs
Travel costs € 31,200
Hotel & subsistence € 124,800
Subject panel members
Travel costs € 93,600
Hotel & subsistence € 187,200
Specialist advisors (foreign)
Travel costs € 15,600
Hotel & subsistence € 15,600
Overall
Travel costs € 149,400
Hotel & subsistence € 334,800
Grand total travel & hotel/subsistence costs € 484,200
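Exhibit 16 is simply the trip and hotel-day counts of Exhibit 15 multiplied by the standard rates of Exhibit 14; a sketch:

```python
# Travel and hotel/subsistence costs: trips x EUR 300, hotel days x EUR 150.
TRIP_EUR, HOTEL_DAY_EUR = 300, 150

# (trips, hotel days) per foreign expert group, from Exhibit 15
groups = {
    "main_panel_chairs":     (30, 48),
    "subject_panel_chairs":  (104, 832),
    "subject_panel_members": (312, 1_248),
    "foreign_advisors":      (52, 104),
}

grand_total = sum(t * TRIP_EUR + h * HOTEL_DAY_EUR for t, h in groups.values())  # 484,200
```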
3.2.8 Total costs for the evaluation panels
The combination of the total manday costs and travel/hotel costs brings us to a total
cost for the evaluation panels of €3,904,933 (Exhibit 17).
Exhibit 17 Total costs for the evaluation panels
Total cost
Grand total manday costs for the panel experts € 3,420,733
Grand total travel & hotel/subsistence costs € 484,200
Total costs for the evaluation panels €3,904,933
3.3 Detailed estimates of the costs for the evaluation management (direct
costs)
This section covers the costs related to the evaluation management, i.e. the activities of
the Evaluation Management Team and the support by external experts for the drafting
of the bibliometric reports (to inform the panels) and the delivery of IT services
including the use of the RD&I Information System.
3.3.1 The Evaluation Management Team
The evaluation management team comprises a core team that will be responsible for
the longer-term design and management of evaluation, an Evaluation Management
Board, additional temporary staff and the panel secretariats. We took account of
different salary levels for the members involved (Exhibit 18), applying salary rates that
are standard in public administration in the Czech Republic.
Exhibit 18 Standard salary rates per staff level
Gross yearly salaries incl. health and social insurance paid by employers In CZK In €
Deputy Minister level 1,244,500 Kč € 45,255
Head of Department level 786,000 Kč € 28,582
Head of Unit level 550,200 Kč € 20,007
Second level 419,200 Kč € 15,244
Secretaries 314,400 Kč € 11,433
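The € amounts in Exhibit 18 correspond to the CZK salaries at an implied exchange rate of roughly 27.5 CZK/€; this rate is our inference from the table, not a figure stated in the text:

```python
# Gross yearly salaries in CZK (Exhibit 18)
salaries_czk = {
    "Deputy Minister level": 1_244_500,
    "Head of Department level": 786_000,
    "Head of Unit level": 550_200,
    "Second level": 419_200,
    "Secretaries": 314_400,
}

RATE = 27.5  # implied CZK per EUR, inferred from the exhibit

# Converting each CZK salary at this rate reproduces the € column
salaries_eur = {level: round(czk / RATE) for level, czk in salaries_czk.items()}
```

Each converted value matches the € column of Exhibit 18 (e.g. 1,244,500 CZK → €45,255).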
The Evaluation Management Team will consist of a core team that will be responsible for evaluation in the longer term, i.e. not only for the implementation of the evaluation exercise itself. We estimated their costs over 5 years of activity, taking the whole evaluation cycle into consideration.
Based on international experience, while taking into account the size of the RD&I system in the Czech Republic, we counted on 3 staff members for the Evaluation Directorate and 7 staff members for the Evaluation Secretariat.7
An Evaluation Management Board will supervise the activities of the Evaluation Management Team and carry the ultimate responsibility. They will be nominated for the duration of the evaluation process, i.e. three years. There will be 6 board members and a Chair, involved on average for 2 days a month.
A number of additional temporary staff members will support the core team during the time of peak activity (the submission phase and assessment phase - Section 2.1.1, above). As is shown in Exhibit 2, above, this will take approximately 1 year, so we counted on 15 additional staff members (FTE) for 1 year.
The assessment phase will require the involvement of panel secretariats to assist the panels before, during and after their meetings. Exhibit 19, below, lists the tasks of the panel secretariats and the estimated total number of mandays needed. Experience teaches that this will require approximately 20 days per panel for the secretariat staff.
This support is critical for the panels – and for the efficiency of the panel evaluation process as such. Two secretariat staff members per panel are needed, and they should have sufficient knowledge of the subject matter to support the panels adequately.
Support staff members are also needed to support the panels during the reporting phase. Ideally, these will be the same members of the Panel secretariats. They will support the panels in writing the RU, Field and Area analytical reports. They will also draft the analytical reports at the EvU level, in order to limit the time investment required from the Subject panel chairs. We calculated that these reporting support activities would require on average 4 days per EvU, for 170 EvUs; 3 days per field, for 36 fields; and 3 days per area, for 6 areas. We also counted in an additional 2 days per panel.
As a result, we counted on 9 additional staff members (FTE) for 1 year to fulfil these
tasks.
7 According to our calculations and based on the RAE 2009 management reports, HEFCE employed a core
team of 19 members and approximately 40 additional temporary staff for the duration of 2 years
Exhibit 19 Tasks and mandays for the panel secretariat
Total days
Panel secretariat - 26 panels, 20 days per panel, 2 staff per panel 1040
Drafting analytical report EvU level (170 EvU - 4 days per EvU) 680
Support for panel reporting field level (36 fields - 3 days) 108
Support for panel reporting area level (6 areas - 3 days) 18
Additional coordination tasks / margin (26 panels - 2 days) 52
Total mandays 1898
Days in FTE years (220 working days) 9
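The conversion from the mandays in Exhibit 19 to 9 FTE years follows from the 220-working-day year used in the exhibit; a minimal sketch of the arithmetic:

```python
import math

# Mandays per task, as itemised in Exhibit 19
mandays = {
    "Panel secretariat (26 panels x 20 days x 2 staff)": 26 * 20 * 2,
    "EvU-level analytical reports (170 EvUs x 4 days)": 170 * 4,
    "Field-level reporting support (36 fields x 3 days)": 36 * 3,
    "Area-level reporting support (6 areas x 3 days)": 6 * 3,
    "Additional coordination tasks / margin (26 panels x 2 days)": 26 * 2,
}

total_mandays = sum(mandays.values())       # 1,898 mandays in total
fte_years = math.ceil(total_mandays / 220)  # rounded up to whole FTE years
```

1,898 / 220 ≈ 8.6, which rounds up to the 9 additional staff members (FTE) counted in the text.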
Exhibit 20, below, shows our estimate of the total costs for the Evaluation
Management Team activities.
Exhibit 20 Total costs for the Evaluation Management Team activities
Evaluation Management Board (Nr FTE days / Total costs)
Deputy Minister level 36 € 7,405
Head of Department level 180 € 23,385
Total cost Evaluation Management Board 216 € 30,790
Core evaluation team - in place for 5 years (Nr FTE years / Total costs)
Evaluation Directorate (Head of Department level) - 1 FTE for 5 years 5 € 142,909
Evaluation Directorate (Head of Unit level) - 2 FTE for 5 years 10 € 200,073
Evaluation Secretariat (Second level) - 4 FTE for 5 years 15 € 228,655
Evaluation Secretariat (Secretary level) - 3 FTE for 5 years 15 € 171,491
Total cost core evaluation team 45 € 743,127
Additional staff per evaluation (Nr FTE years / Total costs)
Additional staff members - 15 FTE for 1 year (secretary level) 15 € 171,491
Panel secretariat / report drafting - 1 year (Second level) 9 € 137,193
Total costs additional staff per evaluation 24 € 308,684
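The Exhibit 20 figures can be reproduced from the Exhibit 18 CZK salaries, again assuming the implied rate of about 27.5 CZK/€ (our inference, not stated in the text). Note that the document appears to round only after summing, so per-line figures can differ by €1 from rounding each line separately:

```python
RATE = 27.5          # implied CZK per EUR, inferred from Exhibit 18
DAYS_PER_YEAR = 220  # working days per FTE year, as in Exhibit 19

# Core evaluation team: FTE years x gross yearly CZK salary (Exhibits 18 and 20)
core_team_czk = (
    5 * 786_000      # Evaluation Directorate, Head of Department level
    + 10 * 550_200   # Evaluation Directorate, Head of Unit level
    + 15 * 419_200   # Evaluation Secretariat, Second level
    + 15 * 314_400   # Evaluation Secretariat, Secretary level
)
core_team_eur = round(core_team_czk / RATE)

# Evaluation Management Board: paid per FTE day
board_czk_per_period = 36 * 1_244_500 + 180 * 786_000
board_eur = round(board_czk_per_period / DAYS_PER_YEAR / RATE)
```

This reproduces the €743,127 total for the core evaluation team and the €30,790 total for the Evaluation Management Board.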
3.3.2 The drafting of the bibliometric reports (to inform the panels)
Bibliometric analyses require specific expertise and IT support, which will require the contracting of external experts. The tasks to be implemented, the expertise needed, and the estimated day rates are shown in the table below.
Task Expertise Day rate
Definition of indicators, professional communication with evaluation management Bibliometrician(s) € 500
Project architecture set-up and workflow management IT analyst, Software architect € 500
Output environment coding Software coder € 300
Handling, distribution, communication Clerical € 300
Exhibit 21, below, shows the total estimated costs for this external support, based
upon the experience gained in the Small Pilot Evaluation. The estimate assumes the
highest possible degree of automation (IS VaVaI linkage to commercial databases
etc.).
Exhibit 21 Breakdown of the costs for the drafting of the bibliometric reports (to
inform the panels)
Expert Mandays Costs
Bibliometrician(s) 20 € 10,000
IT analyst, Software architect 20 € 10,000
Software coder 80 € 24,000
Clerical support staff 40 € 12,000
Total costs bibliometric reports 160 € 56,000
3.3.3 IT services, including the use of and extensions to the RD&I Information
System
As explained in Section 2.4, above, the RD&I Information System (IS VaVaI) is an
important component for a cost efficient implementation of the evaluation – both for
the Evaluation Management Team and the participating Research Organisations.
The report The RD&I Information System as an information tool for evaluation
(Background Report 9) holds a detailed cost estimate for the support services to the
evaluation process that the RD&I IS can provide. These include the costs for services
that are directly related to the evaluation and the support for its implementation, as
well as other costs related to the enhancement of the technical features of the RD&I
Information System in order better to serve the RD&I communities.
The report makes a distinction between costs related to activities that are considered
1. Essential for the implementation of the evaluation methodology as it is currently
foreseen
2. Recommended extensions because they will bring important benefits for the
quality of the underlying data and/or the smoothness of the evaluation process.
The estimates include all the associated project management, quality assurance, and
other overhead at a standard rate of 500 €/man-day. Where extensive manual
processing is required, the rate is lowered to 300 €/man-day.
The following three types of costs are calculated:
- Initial set-up costs
- Costs per evaluation campaign
- Running costs per year
These types of costs are then combined into an aggregated cost figure that includes
one half of the initial set-up costs (counting on the set-up costs to amortize over the
first two evaluation campaigns), the per-evaluation costs, and the running costs for the
period between the evaluations (six years in our estimates).
We included in our overall evaluation cost estimates the direct EM costs per evaluation period for both the essential and recommended services and extensions.
Necessity level Essential Recommended Essential + Recommended
Initial set-up costs 365,500 € 417,500 € 783,000 €
Each evaluation run costs 114,000 € 51,500 € 165,500 €
Running costs / yr 205,300 € 130,400 € 335,700 €
Total costs per eval period 1,528,550 € 1,042,650 € 2,571,200 €
Direct EM costs per eval period 1,018,360 € 609,765 € 1,628,125 €
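The aggregation rule described above (half the set-up costs, one evaluation run, and the running costs for the six years between evaluations) reproduces the "Total costs per eval period" row; a minimal check against the Essential and Recommended columns:

```python
def total_per_eval_period(setup: float, per_run: float, per_year: float,
                          years_between: int = 6) -> float:
    """Aggregate IT costs per evaluation period: half the initial set-up
    costs (amortised over the first two campaigns), one evaluation run,
    and the yearly running costs for the period between evaluations."""
    return setup / 2 + per_run + years_between * per_year

# Figures from the table above, in EUR
essential = total_per_eval_period(365_500, 114_000, 205_300)
recommended = total_per_eval_period(417_500, 51_500, 130_400)
```

This yields €1,528,550 for the Essential column and €1,042,650 for the Recommended column, matching the table.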
Exhibit 22 shows the total evaluation management costs, adding up the costs for the evaluation management team, the bibliometric analyses, and the IT services/use and extensions of the RD&I Information System.
Exhibit 22 Total evaluation management costs
Evaluation Management Team € 1,082,601
Bibliometric reports € 56,000
External IT support & RD&I IS € 1,628,125
Total evaluation management costs € 2,766,726
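The Evaluation Management Team line in Exhibit 22 is itself the sum of the three Exhibit 20 components (board, core team, additional staff), and the grand total then adds the bibliometric and IT costs; a short cross-check:

```python
# Components of the evaluation management costs, in EUR
management_team = 30_790 + 743_127 + 308_684  # board + core team + additional staff (Exhibit 20)
bibliometric = 56_000                          # bibliometric reports (Exhibit 21)
it_support = 1_628_125                         # direct EM costs, Essential + Recommended IT services

total_management = management_team + bibliometric + it_support
```

The components sum to €1,082,601 for the Evaluation Management Team and €2,766,726 overall, as in Exhibit 22.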
3.4 Detailed estimates of the indirect costs
In this last section we cover the indirect costs, i.e. the costs related to the investment
of time and staff members for the submission of the self-assessment reports and the
selection of the publications to submit for review.
Our estimates are in the first instance based on the experience gained during the Small Pilot Evaluation (SPE). The Research Organisations that participated in the SPE indicated an investment of time and human resources as shown in Exhibit 23, below.
On average, a Research Unit (RU) spent 30,967 CZK for its participation in the Small Pilot Evaluation. This sum is calculated on the basis of a 300 k CZK yearly salary for administrative staff and a 500 k CZK yearly salary for researchers, which is the average of the salary indications given by the participating RUs.
Exhibit 23 Time and HR investment in the SPE (average per RU)
Time spent (days) Nr of people involved (FTE)
Administrative staff 7.2 0.9
Researchers 9.2 1.2
We made the following considerations:
- The revision of the Evaluation Methodology after the Small Pilot Evaluation put a stronger emphasis on self-assessment, so more time investment by researchers will be required.
- The evaluation panels in the SPE indicated that the qualitative information in the submitted self-assessments was often of limited quality and value. In fact, as is also shown in Exhibit 23, above, the impression was that in most cases only 1 researcher rather than a group of researchers had been involved. This will not be possible with the final version of the EM (where a SWOT analysis is required), nor is this practice in line with the intentions of the evaluation, i.e. to create an opportunity for ‘self-assessment’ in the Research Organisations. In other words, more time will need to be invested and more researchers involved.
- The RUs that participated in the SPE were relatively small, while a full-scale evaluation will also involve (very) large RUs, so more time investment will be required on average. Again, this will particularly concern the researchers. For the administrative staff, the improved support through the use of the RD&I Information System should compensate.
As a result of these considerations, we kept the number of mandays for the administrative staff as it appeared from the SPE (i.e. an average of 7.2 days per RU). For the researchers, we counted on an average involvement per RU of 10 researchers for the same number of days as indicated for the SPE.
We therefore estimated the indirect costs as shown in Exhibit 24, below, calculated using the average salary indications given by the research organisations participating in the Small Pilot Evaluation.
Exhibit 24 Breakdown of the indirect costs
Total Mandays Total Costs
Administrative staff 5,614 € 334,039
Researchers 107,502 € 5,922,995
Total Indirect costs 113,116 € 6,257,033