4. • Componential view → multidimensional, developmental view (cf. Fulcher, 2012; Vogt & Tsagari, 2014)
• Taylor (2013) → development of the empirically based LAL survey (n = 1,086)
• 10 hypothesized factors → 71 − 18 = 53 items, 9-factor model (CCR = 73%)
Kremmel & Harding (2020)
Towards a Comprehensive, Empirical Model of Language Assessment Literacy across Stakeholder Groups: Developing the Language Assessment Literacy Survey
Benjamin Kremmel (University of Innsbruck, Innsbruck, Austria) and Luke Harding (Lancaster University, Lancaster, UK)
ABSTRACT
While scholars have proposed different models of language assessment literacy (LAL), these models have mostly comprised prescribed sets of components based on principles of good practice. As such, these models remain theoretical in nature, and represent the perspectives of language assessment researchers rather than stakeholders themselves. The project from which the current study is drawn was designed to address this issue through an empirical investigation of the LAL needs of different stakeholder groups. Central to this aim was the development of a rigorous and comprehensive survey which would illuminate the dimensionality of LAL and generate profiles of needs across these dimensions. This paper reports on the development of an instrument designed for this purpose: the Language Assessment Literacy Survey. We first describe the expert review and pretesting stages of survey development. Then we report on the results of an exploratory factor analysis based on data from a large-scale administration (N = 1086), where respondents from a range of stakeholder groups across the world judged the LAL needs of their peers. Finally, selected results from the large-scale administration are presented to illustrate the survey's utility, specifically comparing the responses of language teachers, language testing/assessment developers and language testing/assessment researchers.
Introduction
Given the widespread use of language assessments for decision-making across an increasing number of social domains (education, immigration and citizenship, professional certification), it has become vital to raise awareness and knowledge of good practice in language assessment for a wide range of stakeholder groups. Scholars have thus called for the promotion of language assessment literacy (LAL) not only for teachers and assessment developers, the two groups most typically involved with language assessments, but also for score users, policymakers and students (among others) (e.g. Baker, 2016; Deygers & Malone, 2019). For such groups, a heightened awareness of the principles and practice of language assessment would ideally lead to more informed discussion of assessment matters, clarity around good practice in using language assessments, and ultimately more robust decision-making on the basis of assessment data
LANGUAGE ASSESSMENT QUARTERLY, 2020, VOL. 17, NO. 1, 100–120. https://doi.org/10.1080/15434303.2019.1674855
Previous studies (1)
Figure 2. Overview of instrument development process:
• Version 1.0 (2015): simplified definitions
• Versions 2.0–2.4: elaborated definitions; multi-item scales
• Expert review 1: 6 professors in LTA
• Pre-test 1: 62 participants; QUAN/QUAL feedback
• Versions 2.5–2.10: further refinement to wording
• Pre-test 2: 25 participants; QUAN/QUAL feedback
• Expert review 2: 2 language assessment experts (with expertise in questionnaire design)
• Version 2.11: final version created
• Survey launched May 2017; data pulled from Qualtrics platform November 2017
7. Kremmel & Harding (2020)
and a modification of both Taylor's (2013) initial framework, and our own hypothesized dimensions based on Taylor's work. The evolution of these dimensions across the three stages – initial framework, hypothesized dimensions, data-driven factor structure – is summarized in Table 6.
Table 4. Eigenvalues for 9-factor solution.

         ------ Initial Eigenvalues ------    Extraction Sums of Squared Loadings
Factor   Total    % of Variance   Cum. %      Total    % of Variance   Cum. %
1        22.065   44.129          44.129      21.755   43.509          43.509
2         4.634    9.267          53.397       4.346    8.691          52.201
3         2.242    4.485          57.882       1.880    3.759          55.960
4         1.840    3.680          61.561       1.549    3.098          59.059
5         1.317    2.634          64.196       1.060    2.121          61.179
6         1.259    2.518          66.713        .979    1.959          63.138
7         1.134    2.269          68.982        .866    1.731          64.869
8         1.040    2.079          71.061        .760    1.519          66.388
9         1.013    2.026          73.088        .671    1.343          67.731
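The 9-factor decision above can be sanity-checked with a short script, using only the numbers printed in Table 4: the Kaiser criterion (retain factors whose initial eigenvalue exceeds 1) and a running sum of the percent-of-variance column. This is a minimal sketch for readers following along, not part of the original analysis.

```python
# Initial eigenvalues and % of variance, copied from Table 4.
initial_eigenvalues = [22.065, 4.634, 2.242, 1.840, 1.317,
                       1.259, 1.134, 1.040, 1.013]
pct_variance = [44.129, 9.267, 4.485, 3.680, 2.634,
                2.518, 2.269, 2.079, 2.026]

# Kaiser criterion: keep every factor with eigenvalue > 1.
n_retained = sum(1 for ev in initial_eigenvalues if ev > 1.0)

# Reproduce the "Cumulative %" column by accumulating the variance shares.
cumulative = []
total = 0.0
for p in pct_variance:
    total += p
    cumulative.append(round(total, 3))

print(n_retained)       # 9 factors retained
print(cumulative[-1])   # ~73.09, matching the table's 73.088 up to rounding
```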
Table 5. The nine factors of LAL as represented in the final version of the LAL survey.

Factor 1  Developing and administering language assessments   Items 62, 68, 61, 63, 64, 66, 70, 69, 65, 60, 67, 58, 59, 17   α = .96
Factor 2  Assessment in language pedagogy                     Items 8, 7, 6, 5, 1, 21                                        α = .89
Factor 3  Assessment policy and local practices               Items 12, 11, 38, 14, 39, 22                                   α = .88
Factor 4  Personal beliefs and attitudes                      Items 46, 47, 45, 48                                           α = .93
Factor 5  Statistical and research methods                    Items 50, 49, 51, 52                                           α = .95
Factor 6  Assessment principles and interpretation            Items 32, 31, 3, 10                                            α = .85
Factor 7  Language structure, use and development             Items 28, 27, 26, 29, 33                                       α = .85
Factor 8  Washback and preparation                            Items 24, 25, 23, 19                                           α = .87
Factor 9  Scoring and rating                                  Items 56, 55, 53                                               α = .85
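The α column above reports Cronbach's alpha, the internal-consistency estimate for each factor's item set. As a reference for readers, here is a minimal sketch of the standard formula applied to made-up toy data (the responses below are invented for illustration, not from the survey):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(totals)).
# rows = respondents, columns = items of one factor, rated on the 0-4 scale.

def cronbach_alpha(scores):
    k = len(scores[0])                       # number of items
    def var(xs):                             # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy data: 5 respondents x 4 items (think of a 4-item factor such as
# "Personal beliefs and attitudes"). Values are invented.
toy = [
    [3, 3, 4, 3],
    [2, 2, 2, 1],
    [4, 4, 3, 4],
    [1, 1, 2, 1],
    [3, 2, 3, 3],
]
alpha = cronbach_alpha(toy)
print(round(alpha, 2))   # 0.94 for this toy matrix
```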
Previous studies (2)
0 = not knowledgeable at all
1 = slightly knowledgeable
2 = moderately knowledgeable
3 = very knowledgeable
4 = extremely knowledgeable
This scale had been developed and modified during pre-testing, and provided the most useful way of assessing the perceptions of needs among different roles/professional groups. An almost identical question was used for a set of items (grouped together) which referred to skills rather than types of knowledge (see Appendix 1).
(Kremmel & Harding, 2020, p. 106)
8. The present survey, based on Kremmel & Harding (2020)
Method (1)
全く知らない (not knowledgeable at all)
わずかに知っている (slightly knowledgeable)
まあまあ知っている (moderately knowledgeable)
よく知っている (very knowledgeable)
きわめてよく知っている (extremely knowledgeable)
For items (49)–(71), "skilled" was rendered as 「長けている/いない」 ("skilled / not skilled").
9. The present survey, based on Kremmel & Harding (2020)
Method (2)
• 53 items − 10 items (2 factors) = 43-item Japanese version
• Plus items on age, work history and number of schools served, type of teacher-preparation program, and degree held
• URL shared → online responses [Google Form]
• December 2022 – April 2023 (n = 53)
• Junior and senior high school English teachers in Shizuoka, Mie, and Miyazaki prefectures
• Recruited through in-service training sessions and via supervisors of education (指導主事)
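The scoring step implied by the bullets above (averaging each respondent's 0–4 ratings into per-factor scores, with two factors dropped from the Japanese version) can be sketched as follows. The item-to-factor mapping here is hypothetical and only for illustration; the real assignments follow Kremmel & Harding's Table 5, renumbered for the 43-item form.

```python
from statistics import mean

# Hypothetical mapping of Japanese-version item IDs to retained factors.
# (Invented for illustration; not the actual questionnaire numbering.)
item_to_factor = {
    "q1": "F1", "q2": "F1",
    "q3": "F2", "q4": "F2",
    "q5": "F4", "q6": "F4",
}

def factor_scores(responses):
    """responses: dict of item ID -> 0-4 rating; returns mean per factor."""
    buckets = {}
    for item, rating in responses.items():
        buckets.setdefault(item_to_factor[item], []).append(rating)
    return {factor: mean(vals) for factor, vals in buckets.items()}

# One invented respondent's ratings on the 0-4 knowledgeability scale.
one_teacher = {"q1": 1, "q2": 2, "q3": 0, "q4": 1, "q5": 2, "q6": 3}
print(factor_scores(one_teacher))   # {'F1': 1.5, 'F2': 0.5, 'F4': 2.5}
```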
→ What differences emerge by career stage and school type? (RQ2)
11. Results (2)
Data: bit.ly/LAQ2023wtrych
Appendix 4. Descriptive statistics of LAL needs for three key stakeholder groups

                                                    LTA developers   LTA researchers   Language teachers
                                                    (n = 198)        (n = 138)         (n = 645)a
                                                    M      SD        M      SD         M      SD
Developing and administering language assessments   3.35   .59       3.28   .60        2.53   .87
Assessment in language pedagogy                     2.53   .83       3.12   .70        2.96   .72
Assessment policy and local practices               2.75   .77       3.01   .82        2.28   .86
Personal beliefs and attitudes                      3.21   .85       3.28   .74        2.83   .89
Statistical and research methods                    3.25   .80       3.38   .74        2.10   1.03
Assessment principles and interpretation            3.60   .52       3.63   .49        2.94   .79
Language structure, use and development             3.19   .70       3.25   .61        3.02   .73
Washback and preparation                            2.85   .82       3.04   .74        3.01   .79
Scoring and rating                                  3.45   .68       3.31   .79        2.83   .83

a Note, for the Language teachers group: n = 644 for Personal beliefs and attitudes and Assessment principles and interpretation; n = 643 for Statistical and research methods and Scoring and rating
Kremmel & Harding (2020, p. 120)
n = 53

      M      SD
F1    1.03   0.72
F2    1.11   0.95
F4    1.19   0.95
F6    1.19   0.91
F7    1.46   0.87
F8    0.92   0.87
F9    1.68   0.71
→ Compared with Kremmel & Harding's (2020) results…
• Every factor's mean is markedly lower
• Among them, F9 > F7 > … > F8
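The two bullets above can be checked directly against the numbers on this slide: the n = 53 factor means, ranked, versus the language-teacher means from Kremmel & Harding's Appendix 4 (F3 and F5 were not administered in the Japanese version; F1–F9 are assumed to follow the order of factors in their Table 5).

```python
# Factor means from the n = 53 Japanese sample (this slide).
japan_means = {"F1": 1.03, "F2": 1.11, "F4": 1.19, "F6": 1.19,
               "F7": 1.46, "F8": 0.92, "F9": 1.68}
# Language-teacher means from Kremmel & Harding (2020), Appendix 4.
kh_teacher_means = {"F1": 2.53, "F2": 2.96, "F4": 2.83, "F6": 2.94,
                    "F7": 3.02, "F8": 3.01, "F9": 2.83}

# Gap per factor: positive values mean the Japanese sample rates lower.
gaps = {f: round(kh_teacher_means[f] - japan_means[f], 2) for f in japan_means}

# Ranking the Japanese means reproduces F9 > F7 > ... > F8 (F8 lowest).
ranked = sorted(japan_means, key=japan_means.get, reverse=True)
print(ranked[0], ranked[-1])   # F9 F8
print(gaps)
```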
[Cropped excerpt from Kremmel & Harding (2020) repeating the 0–4 knowledgeability scale (and its Japanese rendering) and describing the remaining survey design: respondents also rated their confidence on a sliding 0%–100% scale, answered background questions (gender, age, years of experience, country of residence, professional/learning role), could leave open-ended comments and an optional self-assessment of knowledge and skill, and the main trial used a non-probability sampling approach because the stakeholder populations were unknown and dispersed.]
21. 2.2. Please specify if you need training in the following domains (None / Yes, basic training / Yes, more advanced training):
a) Giving grades
b) Finding out what needs to be taught/learned
c) Placing students onto courses, programs, etc.
d) Awarding final certificates (from school/program; local, regional or national level)

3. Content and concepts of LTA
3.1. Please specify if you were trained in the following domains (Not at all / A little (1–2 days) / More advanced):
1. Testing/Assessing:
a) Receptive skills (reading/listening)
b) Productive skills (speaking/writing)
c) Microlinguistic aspects (grammar/vocabulary)
d) Integrated language skills
e) Aspects of culture
2. Establishing reliability of tests/assessment
3. Establishing validity of tests/assessment
4. Using statistics to study the quality of tests/assessment

3.2. Please specify if you need training in the following domains (None / Yes, basic training / Yes, more advanced training):
1. Testing/Assessing:
a) Receptive skills (reading/listening)
b) Productive skills (speaking/writing)
c) Microlinguistic aspects (grammar/vocabulary)
d) Integrated language skills
e) Aspects of culture
2. Establishing reliability of tests/assessment
3. Establishing validity of tests/assessment
4. Using statistics to study the quality of tests/assessment
Q3 Are there any skills that you still need?
Q4 Please look at each of the following topics in language testing. For each one please decide whether you think this is a topic that should be included in a course on language testing. Indicate your response as follows:
5 = essential
4 = important
3 = fairly important
2 = not very important
1 = unimportant
A. History of Language Testing
B. Procedures in language test design
C. Deciding what to test
D. Writing test specifications/blueprints
E. Writing test tasks and items
F. Evaluating language tests
G. Interpreting scores
H. Test analysis
I. Selecting tests for your own use
J. Reliability
K. Validation
L. Use of statistics
M. Rating performance tests (speaking/writing)
N. Scoring closed-response items
O. Classroom assessment
P. Large-scale testing
Q. Standard setting
R. Preparing learners to take tests
S. Washback on the classroom
T. Test administration
U. Ethical considerations in testing
V. The uses of tests in society
W. Principles of educational measurement
Q5 Which was the last language testing book you studied or used in class? What did you like about the book? What did you dislike about the book?
Q6 What do you think are essential topics in a book on practical language testing?
Q7 What other features (e.g. glossary/activities etc) would you most like to see in a book on practical language testing?
Q8 Do you have any other comments that will help me to understand your needs in a book on practical language testing?
Q9 How would you rate your knowledge and understanding of language testing?
5 = very good
4 = good
3 = average
2 = poor
1 = very poor
(Fulcher, 2012, p. 131)
Fulcher (2012); Vogt & Tsagari (2014)
But somehow these don't quite fit…
22. References
• 出口マクドナルド 友香理・福田 純也・亘理 陽一 (2019). 「高等学校における英語運用能力アセスメントの現状と課題: 静岡県内高校のパフォーマンス・タスク分析」 [The current state and issues of assessing English performance ability in senior high schools: An analysis of performance tasks at high schools in Shizuoka Prefecture]. 『教育実践総合センター研究紀要』, 29, 162–168.
• Fulcher, G. (2012). Assessment literacy for the language classroom. Language Assessment Quarterly, 9(2), 113–132. https://doi.org/10.1080/15434303.2011.642041
• Kremmel, B., & Harding, L. (2020). Towards a comprehensive, empirical model of language assessment literacy across stakeholder groups: Developing the Language Assessment Literacy Survey. Language Assessment Quarterly, 17(1), 100–120. https://doi.org/10.1080/15434303.2019.1674855
• Taylor, L. (2013). Communicating the theory, practice and principles of language testing to test stakeholders: Some reflections. Language Testing, 30(3), 403–412. https://doi.org/10.1177/0265532213480338
• Vogt, K., & Tsagari, D. (2014). Assessment literacy of foreign language teachers: Findings of a European study. Language Assessment Quarterly, 11(4), 374–402. https://doi.org/10.1080/15434303.2014.960046