1. A study examined how teachers analysed and used results from a common assessment to inform their teaching. Teachers struggled to understand the specific skills being tested by questions and how results linked to curriculum standards.
2. Through workshops, teachers learned to analyse learner results systematically using colour coding and item difficulty rankings. This process helped teachers identify class weaknesses and strengths.
3. The workshops showed that while teachers could identify what learners knew, they struggled to develop strategies to address gaps. Teachers need demonstrations of alternative teaching methods and support in linking analysis to improvement plans.
Hands-on Learning Brief 38 | May 2013
Education to Read and Write: Learning from our implementing partners
How can teachers be assisted to use learner results to inform effective teaching?
Assessment for learning or formative assessment is
based on the idea that significant improvements in
learning can take place when teachers and learners
are able to obtain and use relevant information
to help them establish where learners are in their
learning, where they need to go and how best
to close any gaps. The success of this process
depends on the effective gathering, use and flow
of information, or feedback. Although effective
feedback loops are often difficult to create, when
formative feedback is effectively used within a
system, educationists tend to agree that it leads to
positive changes in learner achievement.
The Annual National Assessment (ANA)
Following this trend, the Department of Basic
Education (DBE) introduced the Annual National
Assessment (ANA) in 2008 as part of the
Foundations for Learning Campaign. The ANA
may be described as a common task assessment.
It is a system-wide, uniform assessment tool
administered by teachers in specific grades within a given timeframe. The value of the ANA as a common task assessment lies in the fact that, if used effectively,
it assists teachers and heads of departments to
identify areas in which remediating strategies
are needed. Furthermore, it communicates the
expected standard of learning at a specific time to
teachers and school management teams (SMTs)
alike.
The ANA report (DBE, 2011a) describes how the
national database of ANA results should be able to
provide information to assist persons working in
teacher development, development of textbooks
and workbooks or providing support to schools
and to target the areas most in need of intervention.
To this end, the DBE published Guidelines for
the Interpretation of the ANA 2011 Results (DBE,
2011c). This document, which describes how the
ANA results should be analysed at class, school,
district, provincial and national level, was circulated
to schools and districts. Even though the guideline
document is quite specific regarding the steps to
be taken by teachers and school management
teams (SMTs) in interpreting the results, teachers
require high levels of expertise and experience to
successfully interpret assessment data in order to
benefit their learners. Teachers need to be able to
administer the assessment, link the assessment
items to specific skills and/or knowledge, correctly
interpret the learners’ responses and results,
provide relevant and constructive feedback and
then adjust their own teaching to meet the needs
of the learners. If teachers succeed in achieving all
these steps, effective formative feedback can be
said to have occurred.
However, the ANA, because it is a large scale
summative assessment, suffers from what Looney
(2011) describes as the central drawbacks in
standardised testing, namely:
- Standardised testing is designed to ensure
that the testing process is reliable, valid
and generalisable in order to make valid
comparisons and claims regarding the effect/
impact of the intervention. Thus, standardised
tests cannot capture the minutiae of
performance on complex tasks such as problem
solving. Standardised tests can provide some
diagnostic detail, but not nearly as much as
formative assessments can.
- Feedback loops in standardised testing
generally suffer from a time delay of several
months, whereas formative testing requires
almost immediate feedback.
- There is a strong association between standardised testing and high-stakes consequences¹, which may lead to "teaching-to-the-test" and thus the narrowing of the curriculum by only teaching what is being tested.

¹ High-stakes testing could be defined as any test which is perceived by the testee to have important consequences for either the individual or the school: thus, any test that is used to make decisions regarding an individual or group, e.g. one that determines whether a school is classified as an "underperforming school" in a specific district.
The strategies the DBE adopted since 2008 counteract
these concerns to a certain extent:
- In an effort to shorten the time delay between
testing and receiving feedback, teachers
administered the tests in their own schools,
but not in their own classes. They then marked
the tests and reported the results of their own
classes. Thus, some feedback to the teacher
through test administration and marking did
take place almost instantly. However, how such
feedback was understood and the extent to
which it was used by teachers to structure their
teaching is largely unknown.
- With regard to the analysis of the results, as
mentioned above, the DBE assumed that
by distributing the guidelines for analysis,
teachers would be able to analyse the results at
class, grade and school level, thus shortening
the feedback time. However, JET’s school
visits suggest that there was no consistent
implementation of the guidelines.
- The DBE encouraged schools to produce learner reports based on the ANA results. Again, while this is an admirable attempt to encourage feedback to parents, the implementation was not monitored consistently and the quality of reports varied from reporting single scores per subject per learner to more substantive reports detailing learners' scores per subject in relation to class and grade averages (a rough sketch of such a report follows this list). The ideal of reporting on the weaknesses and strengths of every learner was most definitely not reached.
- Because it is extremely difficult to test higher
order skills (e.g. problem solving and creativity)
and ’soft skills’ such as conflict resolution and
time management through standardised tests,
the DBE recommended that the ANA should
be supplemented with day-to-day formative
assessment in the classroom. However, as is
usually the case with any large-scale testing, when the ANA results are reported, formative classroom assessment tends to be overshadowed by the quantitative results.
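As a rough sketch of what one of the more substantive learner reports mentioned above might contain (the brief prescribes no format; all names and marks below are invented for illustration):

```python
# Hypothetical learner report: one learner's subject scores shown against
# the class and grade averages. All names and marks are invented.
from statistics import mean

class_scores = {"Mathematics": [45, 60, 72, 38],
                "Home Language": [55, 64, 70, 49]}   # this class's marks (%)
grade_averages = {"Mathematics": 52.0, "Home Language": 58.5}  # invented
learner = {"name": "Learner A", "Mathematics": 60, "Home Language": 64}

print(f"ANA report for {learner['name']}")
for subject, scores in class_scores.items():
    print(f"  {subject}: {learner[subject]}% "
          f"(class average {mean(scores):.0f}%, "
          f"grade average {grade_averages[subject]:.0f}%)")
```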
How can teachers’formative use and
analyses of the ANA data improve?
Despite these limitations, now that the DBE has instituted the ANA as one of the tools to improve learning and teaching, it is important to establish whether educators are in fact able to use the tool
effectively. In this context, JET carried out a pilot
project to determine whether teachers could
analyse and use common task assessment data,
leading them to adapt their teaching strategies to
improve their learners’ achievement. Teachers in
11 schools were given a common task assessment
to administer to their learners. The common task
assessment, which was developed by JET, included
questions to assess learners’ understanding of the
concepts taught during a specific period of time
as prescribed in the DBE's Curriculum and Assessment Policy Statements (CAPS).
The study method
The study entailed first training the participants
in assessment procedures and techniques. An
initial six-hour workshop was conducted during
which teachers and heads of departments (HODs)
were instructed on the general characteristics
of formative and summative assessments and
common tasks. This was followed by a discussion
of every item in the common task assessment
and its link to the CAPS, as well as training on
the administration, scoring, moderation and
recording of the learners’ results by item on an
assessment grid. Participants were also trained to
use a curriculum monitoring tool and to follow
the required procedures for its completion. The
anticipated outcome of the workshop was that
participants would be able to practically administer,
score, moderate and record their classes’ results
on the common task assessment and critically
examine their level of curriculum implementation
by completing the curriculum monitoring tool.
In a second six-hour workshop, participants were trained to analyse learners' results using a colour-
coding system as well as item difficulty rankings
(expressed as the percentage of correct answers
for a particular item). This system was used
since the participants were all from remote rural
communities with limited access to computers and
electricity. This system could, however, easily be
adapted for use in Excel. The principles of analysis
remain the same. The analysis assisted participants
to draw conclusions about the state of learning
in their classes. This information was linked to
both the findings regarding their own curriculum
implementation and their schools’ academic
improvement plans. Practical demonstrations
of remediation strategies and adaptations of
teaching methods were then given and discussed.
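To make the analysis concrete, the sketch below reproduces the same steps digitally, much as they might be adapted for Excel: an assessment grid of results recorded by item, item difficulty expressed as the percentage of correct answers, and a traffic-light colour code. The marks and colour thresholds are invented for illustration; the brief does not prescribe cut-offs.

```python
# A sketch of the workshop's paper-based analysis done digitally.
# Items, marks and colour thresholds below are all illustrative.

items = ["Q1", "Q2", "Q3", "Q4", "Q5"]

# Assessment grid: learners' results recorded by item (1 = correct).
grid = {
    "Learner A": [1, 1, 0, 1, 0],
    "Learner B": [1, 0, 0, 1, 0],
    "Learner C": [1, 1, 1, 1, 0],
}

def colour(pct: float) -> str:
    """Traffic-light colour coding; the thresholds are assumptions."""
    if pct >= 70:
        return "green"   # concept appears secure
    if pct >= 40:
        return "amber"   # partially mastered
    return "red"         # needs remediation

# Item difficulty ranking: percentage of correct answers per item.
difficulty = {
    item: 100 * sum(scores[i] for scores in grid.values()) / len(grid)
    for i, item in enumerate(items)
}

# List items from weakest to strongest to surface class-level gaps.
for item, pct in sorted(difficulty.items(), key=lambda kv: kv[1]):
    print(f"{item}: {pct:.0f}% correct -> {colour(pct)}")
```

Sorting items by percentage correct reproduces the difficulty ranking used in the workshops, while the colour column gives the same at-a-glance view of class strengths and weaknesses as the paper-based colour coding.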
What did we learn from the workshops?
Several important observations were made during
the workshops which help to answer the question:
How can teachers be assisted to effectively
use learner results to inform their teaching?
1. During the discussion on the common task
assessment questions, participants could
easily identify which topic area in the CAPS a
question related to, but struggled to identify
the exact skill or concept being tested.
Understanding of the finer details regarding what was being tested by each question was lacking, e.g. teachers could identify that an item asking learners to draw the next shape in a pattern tested pattern recognition, but not that the pattern was a geometric one involving three different shapes.
Since teachers struggled to pin down these details, it was almost impossible for them to decide where in the hierarchy of difficulty a particular question lay, e.g. teachers found it difficult to decide whether one such pattern item was more or less difficult than another, and why.
This might provide a clue as to why teachers
generally teach at a specific difficulty level and
not beyond and why teachers often seemingly
teach skills at random. This also indicates that
some teachers might find the analysis and
interpretation of the ANA results difficult;
they may merely be using the DBE’s guideline
document without understanding the link
between each question in a test and the
specific concept or skill being tested and how
it fits into the CAPS. A document containing
this information could easily be produced
by the DBE from the test frameworks used
during the ANA test development phase.
2. The practical analysis of the results during
the workshop was time-consuming, but well
worth it. Participants very easily pointed out
their classes’ weaknesses and strengths, as
well as learners in need of additional support.
Participants were forthcoming in providing
information regarding whether these
problematic topics had been taught in class,
as well as whether they were taught at the
correct difficulty level. This was an indication
that the workshop was experienced by most
participants as a safe environment in which
mutual growth and learning was taking place,
as opposed to an environment focused on
compliance. To ensure that the ANA results
are used to maximum effect, their analysis
should always be done in an atmosphere of
mutual respect and trust. This has implications
for how HODs and principals approach the
analysis of the ANA results at class, grade and
school level and also how districts use the
results to structure school level interventions.
If the benefit of an open and honest analysis
of the results is not apparent to teachers or
if the use of the results is viewed as punitive,
there is a high risk that teachers may quite naturally view the ANA as having high-stakes consequences. This will
almost certainly lead to more “teaching-to-
the-test” and the subsequent narrowing
of the curriculum that has been associated
with other large scale testing programmes.
3. A further significant observation during the
workshops was that in general, participants
expressed genuine surprise at the difficulty
level required in the second term of grade 1.
With the exception of one or two teachers, all
the teachers indicated that they teach well
below the level expected in the CAPS. This
is a disturbing indication that teachers are
not yet well versed in the expected learning
standards and levels per term as prescribed in
the CAPS. However, analysis of the common
task assessment results during the workshop
had the beneficial effect of making teachers
aware of the gap between what they teach
and what was expected; this is confirmation
that common task assessments such as
the ANA could be used to communicate
the expected standard of teaching. This
emphasises the great responsibility that lies
with DBE and all developers of common
task assessments to ensure that the correct
information is communicated to schools.
4. It was also observed that teachers were
much more competent in identifying what
their learners know and how they came
to have this knowledge than in coming up
with strategies to close the gaps between
learners’ knowledge and expectations in
the CAPS. Teachers tended to answer the
questions of what to do to change the
situation very generally, by saying they
would ‘re-teach’ or ‘teach differently’.
Details regarding the teaching strategies
they would adopt were lacking, even
when teachers were prompted to provide
them. This led the workshop facilitator to
demonstrate examples of teaching strategies
in areas where most learners experienced
problems. This situation again highlights
that teachers need practical demonstrations
on how to teach certain content through
different methodologies as well as more
practical training and support on the CAPS.
5. Some of the more disturbing evidence that
emerged was that teachers struggled the most
with teaching counting, number concept,
patterns and time. Counting, number concept
and patterns form the foundation for many
of the other concepts taught in mathematics
and thus weak teaching strategies in these
areas can lead to devastating consequences
for learners in later years. Although these
observations are based on a small-scale study, they raise the questions: Are we training our
teachers well? Is practical teaching experience
used sufficiently in pre-service training? Do in-
service teachers get sufficient in-class, practical support? Teachers' comments certainly implied
that they found the practical demonstrations
offered during the workshops useful.
6. An attendant benefit of the workshop was the
illustration of the use of practical, cheap and
accessible teaching aids (i.e. mostly home-
made). Teachers acknowledged that they
learned that no fancy, expensive equipment is
needed to teach mathematics. It was surprising
how many of the participants expressed
the belief that effective teaching depends
on the use of elaborate, costly materials.
7. With regard to curriculum monitoring through
book control and class visits, the participants
were very aware that this was part and parcel of
the HOD’s job and a requirement in all schools.
However, among both teachers and HODs
there seemed to be uncertainty regarding why
and how to do this practically. There was also
an initial hesitancy to implement curriculum
monitoring due to a preconception that
curriculum monitoring involves “someone
checking up on someone else”. Once the
why and how of curriculum monitoring was
explained and the supportive role of the
HOD discussed, participants demonstrated
an increased willingness to play a role in the
process. They then freely engaged with the
curriculum monitoring tool and were able to
link their findings to the learners’results. Once
thelearners’resultswereanalysedandthelinks
with the findings of the curriculum monitoring
made, the next task was to capture this
information in the academic improvement plan
and stipulate the steps to be taken to improve
the current state of learning and teaching.
At this stage, the workshop participants
again struggled to pin down the exact details
of what needed to be done. This finding is
supported by the general lack of detail and vagueness in both school improvement plans and academic improvement plans across schools. This is another indication
that educators require practical training and
support as well as assistance in establishing the
links between what the gaps in teaching are
and how to go about fixing them. It definitely
is not enough to instruct teachers to develop
academic improvement plans (a compliance
approach), without training and supporting
them in the nitty-gritty of coming up with
remedial actions (a developmental approach).
Conclusion
Would teachers benefit from training on the
use of learner results to inform their teaching?
The only possible answer to this question is
a resounding “YES!” Not only would teachers
benefit from training on the use of common task
assessments (such as the ANA) and the effective
use of assessment results; they also need practical
support and guidance in this regard. This study
demonstrated that: some teachers struggle to
link specific items in an assessment to a detailed
understanding of the skill being tested and to
understand how they fit into the sequence of skills
described in the CAPS; some teachers struggle
to come up with alternative teaching strategies
to address gaps in learner competency; some
teachers struggle to summarise the gaps identified
through a common task assessment and link them to specific action steps to remediate gaps in schools'
academic improvement plans. Such teachers are
not able to use common task assessment data
in a formative way: feedback is not taking place
successfully. Teacher development efforts should
take cognisance of the need to remedy this
situation, especially in the light of the rollout of the
ANA by the DBE.
There is a need for higher education institutions,
non-governmental organisations and the DBE to
join hands to ensure that the rich data generated
through the ANA are used to full effect by all
teachers. This can be achieved by developing
assessment grids for the ANA tests to be
completed by teachers, detailing the specific skill
or concept tested in each item. However, this on
its own would not be sufficient to assist teachers to
address learners’ weaknesses. Information about
the common mistakes made by learners in specific
questions, as well as possible teaching strategies
to remedy these, needs to be made available to
teachers.
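As a rough sketch of how such an assessment grid might work (the item-to-skill mapping, results and threshold below are invented for illustration), tagging each item with the specific skill it tests lets class results be summarised per skill rather than per question, with weak areas flagged for the academic improvement plan:

```python
# Hypothetical assessment grid of the kind proposed here: each item is
# tagged with the specific skill it tests, so class results can be
# summarised per skill. Item-skill mapping and results are invented.
from collections import defaultdict

item_skill = {
    "Q1": "counting",
    "Q2": "counting",
    "Q3": "geometric patterns",
    "Q4": "number concept",
    "Q5": "time",
}
percent_correct = {"Q1": 85, "Q2": 78, "Q3": 31, "Q4": 55, "Q5": 24}

by_skill = defaultdict(list)
for item, skill in item_skill.items():
    by_skill[skill].append(percent_correct[item])

# Flag weak skills for the improvement plan (50% threshold is assumed).
for skill, results in sorted(by_skill.items()):
    avg = sum(results) / len(results)
    flag = "  <- flag for improvement plan" if avg < 50 else ""
    print(f"{skill}: {avg:.0f}% correct{flag}")
```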
Furthermore, all content training for teachers
should make provision for the analysis of concepts
and skills with regard to the curriculum, including
what concepts and skills precede and follow the
specific concept or skill in the curriculum, common
misconceptions, and how best to teach these
concepts and skills. This would ensure that teacher
development goes beyond content training by
making the pedagogical aspects of learning a
specific concept or skill clear, preferably through
demonstration lessons and practical activities.
Lastly, teachers, HODs and principals should receive
specific training on the analysis and summarising
of and reporting on the ANA data. The link between
the findings in the ANA and other common tasks
and schools’ academic improvement plans should
be emphasised.
References
Black, P. & Wiliam, D. (1998). Assessment and Classroom Learning. Assessment in Education, 5(1): 7-74.
DBE. (2011a). Report on the Annual National Assessment of 2011. Pretoria: Department of Basic Education.
DBE. (2011b). Action Plan to 2014: Towards the Realisation of Schooling 2025. Pretoria: Department of Basic Education.
DBE. (2011c). Guidelines for the Interpretation of the ANA 2011 Results. Pretoria: Department of Basic Education.
Looney, J.W. (2011). Integrating formative and summative assessment: Progress toward a seamless system? OECD Education Working Papers, No. 58. Paris: OECD.
Sadler, R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2): 119-144.
Wiliam, D. & Black, P. (1996). Meanings and Consequences: a basis for distinguishing formative and summative functions of assessment? British Educational Research Journal, 22(5): 537-548.
The DG Murray Trust encourages its implementing partners to share their experiences
and learning in the form of a Hands-on learning brief. Download guidelines on writing a
Hands-on brief from http://www.dgmt.co.za/what-we-learned/.
For more information visit http://www.dgmt.co.za
Tel: +27 11 403 6401
Fax: +27 11 339 7844
Email: mmossels@jet.org.za
Web: www.jet.org.za
This learning brief tells of the hands-on experience of JET Education Services.
Physical address:
5th Floor, Forum 1, Braampark,
33 Hoofd Street, Braamfontein
Postal address:
PO Box 178,
WITS, 2050