Reliability and Comparability of Peer Review Results

Conference "New Frontiers in Evaluation", Vienna, April 24-25, 2006

Nadine Rons, Coordinator of Research Evaluations & Policy Studies,
Research & Development Department, Vrije Universiteit Brussel
Eric Spruyt, Head of the Research Administration Department,
Universiteit Antwerpen
"Three cheers for peers"
'Three cheers for peers', Editorial, Nature 439, 118 (12 January 2006).
• "Thanks are due to researchers who act as referees, as editors resolve their often contradictory advice."
• "Only in a minority of cases does every referee agree ..."
Presentation plan
I. Validation of results
Reliability & comparability
II. Material investigated
'Ex post' peer review + citation analysis of teams
III. Investigation of results
Reliability: inter-peer agreement & different rating habits
Comparability: related concepts & intrinsic characteristics
IV. Conclusions
Aimed at improved results, a better understanding, and choosing the right method
I. Validation of results
1. Reliability
Peer review: the principal method to evaluate research quality.
BUT: various kinds of bias & different rating habits,
and it is not always feasible to use measures limiting their influence.
⇒ Is it possible to measure reliability?
2. Comparability
H. F. Moed (2005), 'Citation Analysis in Research Evaluation', chapter 18: 'Peer Review and the Validity of Citation Analysis', Springer.
More reliable results ⇒ better correlations with other outcomes?
Correlations are often relatively weak & depend on the discipline.
⇒ Can this be explained? (crucial for further acceptance!)
II. Material investigated
(Peer review)
1. Peer review
– Shared principles for the panel-evaluations of teams per discipline:
• Expertise-based
• International level
• Uniform treatment
• Coherence of results
• Multi-criteria approach
• Pertinent advice
– Exceptions:
• Different experts for each team (1 discipline at VUB).
• Specific methodology using different indicators (1 discipline at UA).
II. Material investigated
(Peer review @ VUB)
– VUB-indicators:
Standard procedure 'VUB-Richtstramien voor de Disciplinegewijze Onderzoeksevaluaties' (VUB guideline for the discipline-wise research evaluations), VUB Research Council (2001).
• Scientific merit of the research / uniqueness of the research
• Research approach / plan / focus / coordination
• Innovation
• Quality of the research team
• Probability that the research objectives will be achieved
• Research productivity
• Potential impact on further research and on the development of applications
• Potential impact for transition to or utility for the community
• Dominant character of the research (fundamental / applied / policy oriented)
• Overall research evaluation
II. Material investigated
(Peer review @ UA)
– UA-indicators:
'Protocol 1998' for the Assessment of Research Quality, Association of Universities in the Netherlands (VSNU, 1998).
• Academic quality
• Academic productivity
• Scientific relevance
• Academic perspective
Exception (1 discipline, "partial" indicators):
• Publications
• Projects
• Conference participations
• Other
• Globally
II. Material investigated
(Citation analysis)
2. Citation analysis
'New Bibliometric Tools for the Assessment of National Research Performance: Database Description, Overview of Indicators and First Applications', H. F. Moed et al., Scientometrics 33 (1995).
– Centre for Science and Technology Studies (CWTS), Leiden University.
– Thomson ISI citation indexes, corresponding period, same teams.
– Indicators include:
• CPP/JCSm: citations per publication with respect to expectations for the journals
• CPP/FCSm: citations per publication with respect to expectations for the field
• JCSm/FCSm: journal citation score with respect to expectations for the field
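These three ratios can be sketched in a few lines of code. This is a minimal illustration with hypothetical toy numbers (citation counts and journal/field citation averages are invented, not CWTS data), showing how the indicators relate citations per publication to journal and field expectations:

```python
# Sketch of the three normalized indicators above, for one hypothetical
# team. Each tuple: (citations received, mean citation score of the
# journal (JCS), mean citation score of the field (FCS)). Toy values.
from statistics import mean

papers = [
    (12, 8.0, 6.0),
    (3, 5.0, 6.0),
    (20, 10.0, 7.0),
]

cpp = mean(p[0] for p in papers)    # citations per publication
jcsm = mean(p[1] for p in papers)   # mean journal citation score
fcsm = mean(p[2] for p in papers)   # mean field citation score

cpp_jcsm = cpp / jcsm    # impact relative to the team's journals
cpp_fcsm = cpp / fcsm    # impact relative to the team's fields
jcsm_fcsm = jcsm / fcsm  # journal set relative to the fields

print(round(cpp_fcsm, 2))  # → 1.84, i.e. above the field average
```

A value of CPP/FCSm above 1 indicates citation impact above the worldwide average of the team's fields; the ratios are ratios of means, so highly cited papers weigh in strongly.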
III. Investigation of results
(Overview)
1. Reliability
a. Inter-peer agreement:
Three groups of evaluations according to measured level of agreement.
b. Rating habits:
Panel-procedures vs. exception with different experts for each team.
⇒ Influence on results & on correlations between peer review indicators investigated.
2. Comparability
a. Related concepts:
'Global' vs. 'partial' indicators & variation with discipline.
b. Intrinsic characteristics of methods:
Contributions to ratings counted differently & scale effects.
⇒ Influence on comparability investigated.
III. Investigation of results
(1. Reliability, a. Inter-peer agreement)
1. Reliability
1. a. Inter-peer agreement
In panels: different opinions ⇒ different positions of teams.
⇒ Level of inter-peer agreement measured by correlations between the ratings from different peers.
⇒ 3 groups compared: panels with high, intermediate and low inter-peer agreement.
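One way to operationalise such an agreement measure — a minimal sketch, not necessarily the computation used in the study — is a rank (Spearman) correlation between two peers' ratings of the same set of teams, implemented here from scratch with tie-aware ranking:

```python
# Minimal sketch (hypothetical ratings, not the study's data):
# inter-peer agreement as the Spearman rank correlation between
# two peers' ratings of the same teams.

def ranks(xs):
    # 1-based ranks, with tied values receiving their average rank
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    # Pearson correlation computed on the ranks
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Hypothetical ratings of six teams by two panel members (1-10 scale)
peer_a = [7, 8, 5, 9, 6, 4]
peer_b = [6, 8, 5, 9, 7, 3]
print(round(spearman(peer_a, peer_b), 3))  # → 0.943
```

Averaging such pairwise correlations over all peer pairs in a panel gives one number per panel, which is what allows the panels to be sorted into high, intermediate and low agreement groups.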
III. Investigation of results
(1. Reliability, a. Inter-peer agreement)
– Influence on results:
Results compared to citation analysis:
⇒ Better inter-peer agreement = higher number of significant correlations,
BUT only at the higher aggregation level of the 3 groups.
⇒ Other mechanisms have a stronger impact on correlations.
– Influence on correlations between peer review indicators:
Significant correlations for each pair of peer review indicators, for each of the 3 groups (also for individual disciplines).
⇒ Correlations between peer review indicators are relatively robust against variations in inter-peer agreement.
III. Investigation of results
(1. Reliability, b. Rating habits)
1.b. Rating habits
Opinions → ratings: according to own habits, reference levels in other evaluations, scores given to other files, known use of scores, ...
Two cases compared:
• Exception with different experts for each team ⇒ scores not necessarily in line with opinions.
• Standard panel-evaluations ⇒ uniform reference level.
III. Investigation of results
(1. Reliability, b. Rating habits)
– Influence on results:
Results compared to citation analysis:
• Panel-evaluations: significant correlations for all peer review indicators with some or all citation analysis indicators (& vice versa).
• Different experts: significant correlation for only 1 pair of indicators.
⇒ Rating habits can influence results significantly.
– Influence on correlations between peer review indicators:
• Panel-evaluations: significant correlations for all pairs of indicators.
• Different experts: significant correlations for only 8% of the pairs.
⇒ Low observed correlations between indicators (expected to be correlated) can indicate diverging rating habits.
III. Investigation of results
(2. Comparability, a. Related concepts)
2. Comparability
2.a. Related concepts
– Partial indicators (publications, projects, conferences, ...): no significant correlations between peer review indicators, in contrast to global indicators (scientific merit, productivity, relevance, ...).
⇒ Performances in different activities are not necessarily correlated.
– Correlations of peer review with citation analysis indicators: the pairs correlating best vary strongly with discipline.
⇒ An indicator may not represent the same concept in all subject areas.
⇒ Always use more than one indicator!
III. Investigation of results
(2. Comparability, b. Intrinsic characteristics)
2.b. Intrinsic characteristics
– Contributions to ratings:
Different in the minds of peers (pro & contra) and in citation analysis (positive counts).
– Scale effects:
Minimum & maximum limits & their position with respect to the mean value.
III. Investigation of results
(2. Comparability, b. Intrinsic characteristics)
• Peer rating frequency distribution:
– Peer ratings: pro & contra, also elements counted 'negatively'.
– Scale: minimum & maximum limit.
[Chart: relative frequency distribution of peer results per evaluation criterion, as a percentage of the number of teams (58), on a 10-point scale from LOW (1-2) through FAIR (3-4), AVERAGE (5-6) and GOOD (7-8) to HIGH (9-10).]
III. Investigation of results
(2. Comparability, b. Intrinsic characteristics)
• Citation impact frequency distribution:
– Citation impact: only positive counts, strong influence of highly cited articles.
– Scale: minimum limit closer to mean & no maximum limit.
[Chart: relative frequency distribution of citation impact (CPP/JCSm and CPP/FCSm, indicator values 0.1-3.1) for all teams in the pure ISI analysis, as a percentage of the number of teams (60).]
III. Investigation of results
(2. Comparability, b. Intrinsic characteristics)
⇒ Good correlations only when effects of intrinsic characteristics can be filtered out.
[Chart: scientific relevance vs. field citation impact, high & intermediate inter-peer agreement group.]
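The scale-effect point can be illustrated with a toy sketch (hypothetical numbers, not the study's data): a bounded peer-rating scale plotted against a skewed, unbounded citation indicator yields a weaker linear correlation than the same data after a log transform dampens the skew:

```python
# Toy illustration (hypothetical data): a skewed, unbounded citation
# indicator depresses the linear correlation with a bounded peer scale;
# dampening the skew (here with a log transform) raises it.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

citation_impact = [0.2, 0.5, 1.0, 2.0, 8.0]  # skewed: one highly cited team
peer_rating = [4, 5, 6, 7, 8]                # bounded 1-10 scale, monotone

r_raw = pearson(citation_impact, peer_rating)
r_log = pearson([math.log(c) for c in citation_impact], peer_rating)
# r_raw ≈ 0.84 < r_log ≈ 0.99: the skew of the citation scale, not the
# underlying agreement, drives the weaker raw correlation.
```

Here the two variables agree perfectly in rank order, so the entire gap between the two coefficients comes from the mismatch between the scales, which is the sense in which intrinsic characteristics must be filtered out before correlations can be interpreted.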
IV. Conclusions
• Reliability
– Peer review results can be influenced considerably by rating habits.
– It is recommended to create a uniform reference level (e.g. using panel procedures) or to check for signs of low reliability by analysing the outcomes of the peer evaluation itself.
• Comparability
– Besides reliability, comparability of results depends on the nature of the indicators, on the subject area, on intrinsic characteristics of the methods, ...
– Different methods describe different aspects. The most suitable method should be carefully chosen or developed.
• Evaluations should always be based on a series of indicators, never on one single indicator.