
Using phenomenography in educational technology research from 2003 to 2017: A systematic review and content analysis

Paper presented at the ICLS 2018, UCL Institute of Education, London, June 25, 2018


  1. 1. Using phenomenography in educational technology research from 2003 to 2017: A systematic review and content analysis Presented by: Sally Wai-Yan WAN (EdD) Faculty of Education, The Chinese University of Hong Kong Email: sallywywan@cuhk.edu.hk Website: www.sallywywan.com ICLS 2018 June 25, 2018
  2. 2. Agenda 01 Background 02 Literature review 03 Methods 04 Findings & discussion 05 Concluding remarks
  3. 3. Background
  4. 4. Definition: Educational Technology “the study and ethical practice of facilitating learning and improving performance by creating, using, and managing appropriate technological processes and resources” (Januszewski & Molenda, 2008, p.1) … associated with: • instructional technology • information technology (IT) • information and communication technology (ICT) • computer-supported collaborative learning • online learning • distance education • computer education (Pelgrum & Plomp, 2002)
  5. 5. Literature review
  6. 6. Definition: Systematic Review A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. It uses explicit, systematic methods that are selected with a view to minimizing bias, thus providing more reliable findings from which conclusions can be drawn and decisions made (Antman 1992; Oxman 1993). (Cochrane Community, 2017)
  7. 7. Features of systematic review 1. explicit and transparent methods 2. a piece of research following a standard set of stages 3. accountable, replicable and updateable 4. relevant and useful to users (EPPI Centre, 2018)
  9. 9. Why Systematic Review? Literature reviews in educational technology studies … • Different applications of educational technology • hypermedia (e.g. Dillon & Gabbard, 1998) • games (e.g. Randel et al., 1992) • … • Factors affecting the effectiveness of the use of educational technology • learning environments (e.g. Winn, 2002) • professional training and development (e.g. Daly, Pachler, & Pelletier, 2009) • … • Factors affecting the implementation of educational technology (e.g. Durlak & DuPre, 2008) However … • narrowly studied or loosely organized • doubted for the trustworthiness as they might contain ‘questionable findings’ due to ‘under- examined assumptions’ (Kirkwood, & Price, 2013, p.536)
  10. 10. What is phenomenography? • Qualitative research approach: researching qualitatively the experience of learning (focus on the content of thinking rather than the process of thought/perception) • “Phenomenography is explained as a qualitative, nondualistic research approach that identifies and retains the discourse of research participants.” (Barnard, McCosker, & Gerber, 1999, p. 212) • “Phenomenography is less a methodology than an explorative and analytic approach to qualitative research.” (Barnard, McCosker, & Gerber, 1999, p. 223) • “…the outcome of phenomenographic research never departs from a portrayal of the language of the participants. The outcome remains purposefully at a descriptive level and is presented in the form of categories of description and outcome space.” (Barnard, McCosker, & Gerber, 1999, p. 223)
  11. 11. Phenomenography: Origins and Usage ‘a research method for mapping the qualitatively different ways in which people experience, conceptualize, perceive, and understand various aspects of, and various phenomena in, the world around them’ (Marton, 1986, p.31) • An effective approach to investigate variations in conceptual understandings of different phenomena (Bowden, & Walsh, 2000)
  12. 12. Phenomenographic study (a second-order perspective; flowchart adapted from Herbert & Pierce, 2013): 1. Choose phenomenon: devise interview protocol; select participants for diversity. 2. Data collection: conduct & transcribe interviews. 3. Immersion in data: interrogate data responses; interpret responses for meaning; group meaning statements into themes. 4. Develop categories & dimensions: revisit data to refine categories & dimensions; delineate categories based on dimensions; describe categories & dimensions in more detail. 5. Finalising of outcome space: generate outcome space.
  13. 13. Phenomenography vs. Phenomenology
  • Phenomenography: The structure and meaning of a phenomenon as experienced can be found in pre-reflective and conceptual thought. Phenomenology: A division is claimed between pre-reflective experience and conceptual thought.
  • Phenomenography: The aim is to describe variation in understanding from a perspective that views ways of experiencing phenomena as closed but not finite. Phenomenology: The aim is to clarify experiential foundations in the form of a singular essence.
  • Phenomenography: A second-order perspective in which experience remains at the descriptive level of participants’ understanding, and research is presented in a distinctive, empirical manner. Phenomenology: A first-order perspective that engages in the psychological reduction of experience.
  • Phenomenography: Analysis leads to the identification of conceptions and outcome space. Phenomenology: Analysis leads to the identification of meaning units.
  15. 15. Phenomenography: Origins and Usage ‘potential impact that phenomenography and the relational perspectives have on pedagogical practices’ (Andretta, 2007, p.152) • Little is known about the extent to which phenomenography can be applied in the context of educational technology
  16. 16. Research Questions (1) How is phenomenography applied in studying educational technology? (2) What are the key limitations and possible future development in the use of phenomenography in educational technology studies as stated in the published articles in the SSCI journals?
  17. 17. Methods
  18. 18. Methods. Databases: the websites of 14 SSCI journals: • Australasian Journal of Educational Technology (AJET) • British Journal of Educational Technology (BJET) • Computer Assisted Language Learning (CALL) • Computers & Education • Educational Technology & Society (ETS) • Educational Technology Research and Development (ETR&D) • Eurasia Journal of Mathematics Science and Technology Education (EJMSTE) • Internet and Higher Education (IHE) • International Journal of Computer-Supported Collaborative Learning (IJCSCL) • Journal of Computer Assisted Learning (JCAL) • Journal of Science Education and Technology (JSET) • Language Learning & Technology (LLT) • Learning Media and Technology (LMT) • Technology Pedagogy and Education (TPE)
  19. 19. Process of Systematic Review: keyword search of SSCI journals in educational technology using *phenomenography*, *phenomenographic method*, and *phenomenographic approach*. Inclusion criteria: • 14 SSCI journals in the field of educational technology • English text • year of publication: 2003-2017 • studies with the use of phenomenography. Exclusion criteria were then applied to arrive at the SSCI papers using phenomenography in educational technology studies.
  20. 20. Identification process: • Identification: records identified through database searching (N=35) • Screening: records excluded (N=3; not a phenomenographic study), records screened (N=32) • Eligibility: full-text articles assessed for eligibility (N=32) • Included: studies included in the analyses: higher education (N=22), secondary education (N=8), primary education (N=2)
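The screening stage above amounts to applying the stated inclusion criteria as a filter over the retrieved records. A minimal sketch of that filter in Python; the record fields, rows, and the two-journal set are hypothetical illustrations, not the study's actual database export:

```python
# Hypothetical bibliographic records; field names are invented for illustration.
records = [
    {"journal": "Computers & Education", "year": 2009,
     "language": "English", "phenomenographic": True},
    {"journal": "Computers & Education", "year": 2001,   # outside 2003-2017
     "language": "English", "phenomenographic": True},
    {"journal": "Learning Media and Technology", "year": 2015,
     "language": "English", "phenomenographic": False},  # not phenomenographic
]

SSCI_JOURNALS = {"Computers & Education", "Learning Media and Technology"}  # 2 of the 14

def include(r):
    """Inclusion criteria from the slide: SSCI journal in the field,
    English text, published 2003-2017, uses phenomenography."""
    return (r["journal"] in SSCI_JOURNALS
            and r["language"] == "English"
            and 2003 <= r["year"] <= 2017
            and r["phenomenographic"])

included = [r for r in records if include(r)]
excluded = len(records) - len(included)
print(len(included), excluded)  # 1 2
```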
  21. 21. Data analysis (1) Descriptive analysis (i.e. counting the number of published papers, origin of papers, targeted groups, data collection methods) with the use of MS Excel software (2) Content analysis (i.e. thematic analysis) • NVivo (version 11.0) to generate and search for patterns (Bazeley, 2013; Clarke & Braun, 2013) (3) A preset coding system to identify and construct themes • Aims of the study • Why using phenomenography? • Any details about phenomenography? • Research questions • Limitations (Challenges) • Contributions/ Significance of the study? • Suggestions for improvement (Solutions) • Forms of data collection • Is phenomenographic analytic procedure applied in the study? • Participants (sample size? Who?)
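The descriptive-analysis step (counting published papers by origin, target group, and data collection method, done in MS Excel on the slide) can equally be a few lines of Python. The rows below are invented for illustration; only the category labels echo the slides:

```python
from collections import Counter

# Illustrative subset of coded studies; the rows are made up.
studies = [
    {"location": "Australia", "method": "Interviews",               "participants": "Students"},
    {"location": "Taiwan",    "method": "Interviews",               "participants": "Students"},
    {"location": "Taiwan",    "method": "Open-ended questionnaires", "participants": "Teachers"},
    {"location": "UK",        "method": "Interviews",               "participants": "Teachers"},
]

# Tally each descriptive attribute across the corpus.
by_location = Counter(s["location"] for s in studies)
by_method = Counter(s["method"] for s in studies)

print(by_location.most_common())
print(by_method["Interviews"])  # 3
```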
  22. 22. Findings & discussion
  23. 23. Research question 1: How is phenomenography applied in studying educational technology?
  24. 24. Findings & Discussion
  25. 25. Closer look … Computers & Education Aim: to increase knowledge and understanding of ways in which digital technology can enhance education, through the publication of high quality research, which extends theory and practice. Scope: research papers on the pedagogical uses of digital technology, where the focus is broad enough to be of interest to a wider education community. (Source: https://www.journals.elsevier.com/computers-and-education)
  26. 26. Closer look … Computers & Education Scope (cont’d): We do not publish small-scale evaluations of specific software/systems in specialist domains or particular courses in individual institutions (unless the findings have broader relevance that is explicitly drawn out in the paper). Papers that include discussions of the implementation of software and/or hardware should focus on the context of use, the user/system interface, usability issues and evaluations of the user experience and impacts on and implications for learning and teaching. Detailed information on implementation architecture should NOT be included in the paper, but may be provided via hot-links. We welcome review papers that include clear aims (research questions), a framework of analysis, and conclusions that reflect the aims of the paper. (Source: https://www.journals.elsevier.com/computers-and-education)
  27. 27. Closer look … Technology Pedagogy and Education Aim: • To serve the international education community by disseminating research findings regarding the use of information and communication technology (ICT) to improve teaching and learning. It explores the particular contribution that ICT can make to educational environments, focusing on empirical evidence derived from both quantitative and qualitative research designs, and on a critical analysis of the ways in which new technologies can support learning and teacher professional development in all phases of education. • To promote the advance of research and scholarship in its field • To provide a vehicle for the exchange and dissemination of reports regarding implementations and practices and research • To offer a forum for the debate of contemporary issues • To create an international arena for discussion of the role of ICT in education and professional development • To develop greater awareness, understanding and cooperation between educators.
  28. 28. Closer look … Technology Pedagogy and Education Scope: welcomes all contributions which extend or amplify what is already published in the field of technology and pedagogy, and which will be of interest to those involved in teacher education. It especially seeks to publish articles concerned with: • ICT practices and innovations in education – schools, teacher education, higher education and informal settings; • the theory and practice of ICT in teaching and professional development/learning/; • evaluation of the impact of ICT on teaching and learning; • social, cultural, political and economic aspects of ICT and pedagogy, particularly concerning better educational opportunities for disadvantaged learners; • pre-service teachers working with ICT in formal and informal settings; learners working with ICT; • links between schools, training institutions and the wider community. Source: https://www.tandfonline.com/action/journalInformation?show=aimsScope&journalCode=rtpe20
  29. 29. Closer look … Journal of Computer Assisted Learning Aim: To make the outcomes of contemporary research and experience accessible. During this period there have been major technological advances offering new opportunities and approaches in the use of a wide range of technologies to support learning and knowledge transfer more generally. There is currently much emphasis on the use of network functionality and the challenges its appropriate uses pose to teachers/tutors working with students locally and at a distance. Source: https://onlinelibrary.wiley.com/page/journal/13652729/homepage/productinformation.html
  30. 30. Closer look … Journal of Computer Assisted Learning Scope: covers the whole range of uses of information and communication technology to support learning and knowledge exchange. It aims to provide a medium for communication among researchers as well as a channel linking researchers, practitioners, and policy makers. JCAL is also a rich source of material for research students in areas such as collaborative learning, knowledge engineering, open, distance and networked learning, developmental psychology, and evaluation. Research themes are treated in a way which will maximise their influence on theory and practice in the learning sciences, in education, vocational training, and professional development. Source: https://onlinelibrary.wiley.com/page/journal/13652729/homepage/productinformation.html
  31. 31. General characteristics
  • Authorship: single (N=8); multiple (N=24)
  • Location: Australia (N=7), Taiwan (N=6), UK (N=5), USA (N=3), Cyprus (N=3), Greece (N=2), South Africa (N=2), Singapore (N=1), New Zealand (N=1), Malta (N=1), Spain (N=1)
  • Title: conceptions (N=11)*, experiences (N=4), approaches (N=3)
  • Research questions stated: yes (N=17); no (N=15)
  • Research method (data collection): interviews (N=24), open-ended questionnaires (N=9), drawing (N=1), tests (N=1), field notes (N=1), mapping study (N=1), online reflections (N=1), journal writings (N=1), video records (N=1), journal of reflections (N=1), audio recording (N=1)
  • Participants: students (N=18), teachers (N=14)
  32. 32. Research questions Key features of research questions? What – questions Examples: • What preconceptions on the phenomenon of sound attenuation do secondary school students use when explaining the acoustic behaviour of materials? (Hernández et al., 2012) • What are in-service preschool teachers’ approaches to online education? (Yang & Tsai, 2017)
  33. 33. Research questions Key features of research questions? How – questions Examples: • How do students perceive the role of ICT in research? (Markauskaite & Wardak, 2015) • How do students approach integrating different sources of knowledge (from the Internet and from their textbooks and classes)? (Ellis et al., 2011)
  34. 34. Research questions Key features of research questions? Common words used? • Conceptions • What are in-service preschool teachers’ conceptions of online education? (Yang & Tsai, 2017) • What are university students' conceptions of OCW? (Limbu & Markauskaite, 2015)
  35. 35. Research questions Key features of research questions? Common words used? • Differences / Variations • What are the possible differences between the students’ conceptions of learning and web-based learning? (Tsai, 2009) • Is there a difference in the students' approaches to online argumentation between conditions? (Tsai & Tsai, 2013) • How variation in the quality of their approaches and conceptions were related to their performance levels? (Ellis et al., 2005)
  36. 36. Research questions Key features of research questions? Common words used? • Relationship / relate • What is the relationship between preschool teachers’ conceptions of and approaches to online education? (Yang & Tsai, 2017) • How are qualitatively different conceptions and approaches related to student achievement in the class tasks? (Ellis et al., 2011)
  37. 37. Research questions Key features of research questions? Common words used? • Approaches • What are in-service preschool teachers’ approaches to online education? (Ellis et al., 2005) • What are the students' approaches to online argumentation as identified by the phenomenographic method? (Tsai & Tsai, 2013)
  38. 38. Research methods: Interviews Provision of interview guide? Provision of sample interview questions? a “specialised form of the qualitative research” (Bruce, 1994: p.49)
  39. 39. Interviews. No. of studies using interviews (N=24): • with sample questions in the main text (N=15) • with sample questions in an appendix (N=2). How much information do the studies provide about conducting interviews in a phenomenographic study?
  40. 40. Interviews. The interviews were carried out as a dialogue between researcher and subject. The subjects were assured that it was their conceptions that were important rather than reaching the “right” answer. The interviewer’s role was to listen attentively, and ask questions only to further clarify views, as described in Ebenezer and Fraser (2001). Rather than asking, “What is voltage?” which tends to elicit a recital of textbook definitions, questions began with phrases such as, “How do you explain…” in order to uncover the subjects’ own ideas. This helped elicit if–then propositions from subjects, such as, “If the current is flowing in this direction, then what we should see is this bulb lighting up first.” (Bledsoe & Flick, 2012)
  41. 41. Interviews (Tsai et al., 2011)
  42. 42. Interviews (Stein et al., 2011)
  43. 43. Interviews. No. of participants in interviews: 7 or fewer: 4 studies; 8-20: 10 studies; more than 20: 10 studies (ranging from 22 to 50 participants)
  44. 44. General characteristics: • Aim of phenomenography: Yes=32, No=0 • Phenomenographic analytical procedures: Yes=30, No=2 • Research rigor (credibility): Yes=22, No=10 • Referential (categories of description): Yes=27, No=5 • Structural (outcome space): Yes=10, No=22
  45. 45. Aims of phenomenography Key features as stated … Phenomenography offers a recursive model to investigate the student experience of learning, one that allows meanings to be unpacked from interrelated concepts. It suggests that at a high level of description, the student experience of learning can be usefully understood as what the students think they are learning, their conceptions, and how they go about their learning, their approaches (Prosser & Trigwell 1999). Each of these can be unpacked further. Student conceptions of learning have a referential aspect, those aspects that reveal the meaning of the parts of the concept, and structural aspects, which are its parts. Student approaches to learning can be divided into what students do while learning, their strategies, and why they do those things, their intentions. (Ellis et al., 2011) Investigations of experiences
  46. 46. Aims of phenomenography Key features as stated … … The goal of phenomenography is to define the different ways in which individuals experience, interpret, understand, perceive, and conceptualize a given phenomenon, or aspect of reality (Marton 1986), which in this case would be the field of engineering. The result of phenomenographic research is a set of categories of description of various aspects of the individuals’ experiences of the phenomenon. Phenomenographic research involves identifying conceptions of the phenomenon and then looking for underlying meanings and relationships among these different conceptions (Orgill 2007). Marton (1981) captures the essence of phenomenography by noting that it searches for the middle-ground between the extremes of ‘‘the common’’ and ‘‘the idiosyncratic.’’ Phenomenography was deemed appropriate for this study because it allowed us to investigate similarities and differences in the perception of engineering that 6th grade students constructed as a result of their experiences. Investigations of experiences: similarities & differences
  47. 47. Aims of phenomenography Key features as stated … The purpose of phenomenographic studies is to describe variations of conceptions that people have of a particular phenomenon, i.e. their conceptual meanings of the phenomenon of interest. Such studies are suited for under- researched areas because they open up the spectrum of perceptions for further more focused studies. (Souleles et al., 2017) Descriptions of variations
  48. 48. Aims of phenomenography Key features as stated … The phenomenographic approach was selected for this study on the basis of its potential to reveal variation in ways (Bowden & Marton, 1998) novice programmers (pre-service teachers) and experienced programmers (inservice teachers) experience the act of learning to program in a new language, specifically, an object-oriented language. In deriving categories of description for students’ conceptions of learning to program, we strove to find relationships between the students and the subject matter that expressed their ways of experiencing the subject, learning to program, as they were learning it. The relational view of learning furnished by phenomenography has formed the basis of much of the research into student learning in higher education. (Govender & Grayson, 2008) Variations: exploration of relationships between subjects & conceptions
  49. 49. General characteristics: • Aim of phenomenography: Yes=32, No=0 • Phenomenographic analytical procedures: Yes=30, No=2 • Research rigor (credibility): Yes=22, No=10 • Referential (categories of description): Yes=27, No=5 • Structural (outcome space): Yes=10, No=22
  50. 50. Phenomenographic analytical procedures Key commonalities? The process of analysing the student responses followed a phenomenographic procedure (Crawford et al. 1994; Cope 2000, Ellis 2004). The process of analysing one of these areas, face- to-face approaches to case-based learning, is taken as an example for the following description. 1. The section of each questionnaire relating to face-to-face approaches was analysed for variation in the response. 2. After reading all of the students’ answers about their face-to-face approaches, there was a sense of qualitative variation in the students’ responses. This variation was evident in the key features of their approaches, some suggesting an awareness of inquiry in order to understand the issues, while others revealed an awareness of more superficial aspects of inquiry, such as a way of collecting information. 3. Student responses with a clear indication of key features of their approaches were identified and used as the beginning of themes (Marton & Booth 1997). The themes were used to orient the classification process as categories of approaches began to appear. 4. The themes were grouped into logically related areas. Some overlapped and these began to form the basis of the outcome space for approaches in the face-to-face context. (Ellis et al., 2005)
  51. 51. Phenomenographic analytical procedures Key commonalities? 5. After re-reading all the student responses in relation to the overlapping draft categories and initial structural relationships, a draft set of categories that revealed meaning and structure was established. This was the initial classification. 6. The initial category descriptions required further development as different aspects of the student experience of learning through inquiry were emphasised by different students. 7. To improve the category of descriptions, representative extracts from the questionnaire were chosen and discussed with supporting researchers. This led to the reconsideration of all of the responses in relation to the redrafted categories. Both the final version of the category descriptions and the re-categorisation of the extracts were agreed upon by all researchers. 8. The redrafted categories became the final version and extracts from student responses that best represented the draft categories were selected. The draft categories and the representative extracts formed the outcome space for this part of the phenomenographic study and drew on the SOLO taxonomy as a way of structuring the hierarchy (Biggs 1999) (Ellis et al., 2005)
  52. 52. Phenomenographic analytical procedures Key commonalities? Phenomenographic data analysis incorporated nine iterations, reading through the whole collective of transcript data towards the next set of categories of description and beyond that towards an outcome space. This process was spread across more than seven months. In the earlier stages the focus was entirely on identifying the categories of description describing variation among students’ accounts of their lived NL experiences. In the later stages the concern turned more on the structural distinctions among categories and how this structuring set the categories in qualitative distinction from each other, yet bringing them together into a coherent whole outcome space (Åkerlind, 2005a). The main issues of concern during the data-analysis stage were the need to focus on the collective (Bowden, 2000), the problem of natural attitude (Marton & Booth, 1997), that is, the researcher’s unintentional assumptions, and the risk of abstraction (Bowden, 2000; Säljö, 1997), meaning going beyond what is said by the data. To reduce the possibility of abstraction and to move away as much as possible from the hazard of natural attitude, the whole transcripts were addressed throughout the data-analysis process (Bowden, 2000). Furthermore, evidence from the data collective was made a requirement to substantiate analysis at all times (Åkerlind, 2005b). To guard against the potential hazard of inadvertently shifting attention to individual transcripts, at all times during the data- analysis stage the fact was foregrounded that, just as the paragraph being read is part of a transcript as a whole, the transcript is part of the data collective as a whole. (Cutajar, 2017)
  53. 53. Phenomenographic analytical procedures Key commonalities? Data analysis employed procedures proposed by Marton (1986, 1994) and was conducted using NVivo software. The analysis started after all interviews were completed and transcribed. “Phenomenographic research aims to explore the range of meanings within a sample group, as a group, not the range of meanings for each individual within the group” (Åkerlind 2008, p. 117); accordingly, the initial pool of meanings was constructed from all the transcripts by categorising data based on common ideas expressed by all participants, without focusing on specific participants or parts of the interviews. Each quote was then interpreted in relation to the pool of meanings, to see what others had said about the same thing, and in relation to individual transcripts, to see what the person had said about other issues relating to the topic. The process took into account all meanings expressed by participants about different aspects of research, irrespective of the structure of the interview, thereby reducing the risk of taking utterances out of their original interview context. To maintain the research rigour and quality of the findings, two researchers initially separately examined the pool of meanings and assigned the participants’ expressions to the categories of descriptions. Then they discussed and refined their interpretations iteratively in several meetings. In the final outcome, each category and each dimension of variation was represented by a number of quotes from multiple participants of varying backgrounds. This result shows that this sample and procedure resulted in sufficient “data saturation” (Bowden & Green, 2005). (Markauskaite & Wardak, 2015)
  54. 54. Phenomenographic analytical procedures Key commonalities? Initial understandings Read all (materials/ transcripts) collectively Differences? Similarities? Categories Outcome space
  55. 55. General characteristics: • Aim of phenomenography: Yes=32, No=0 • Phenomenographic analytical procedures: Yes=30, No=2 • Research rigor (credibility): Yes=22, No=10 • Referential (categories of description): Yes=27, No=5 • Structural (outcome space): Yes=10, No=22
  56. 56. Research rigor (credibility) There was 90% agreement between the three evaluators concerning the interpretation of the data and the categories obtained and after discussing the minor differences, consensus was established. (Levin & Wadmany, 2005)
  57. 57. Research rigor (credibility) The interviews took place before the essays were graded and returned to the students so as to avoid compromising the evaluations. The teacher of the course was also interviewed after the course but before grading the essays. All interviews were taped and transcribed. The interviews were conducted in Finnish and translated into English by the first author. (Lindblom-Ylänne & Pihlajamäki, 2003)
  58. 58. Research rigor (credibility) First, the collected data were processed and analyzed, then, categories were generated, and finally, the researchers’ insights into the investigated phenomenon were constructed. The data was jointly analyzed by two educational-researchers for establishing research trustworthiness. (Barak, 2007)
  59. 59. Research rigor (credibility) To maintain the research rigour and quality of the findings, two researchers initially separately examined the pool of meanings and assigned the participants’ expressions to the categories of descriptions. Then they discussed and refined their interpretations iteratively in several meetings. (Markauskaite & Wardak, 2015)
  60. 60. Research rigor (credibility) After the initial classifications, the students’ interview responses were classified again by two researchers, independently. The percentage of agreement was applied to calculate the reliability of these two researchers’ coding, showing an initial 82% agreement between them. Those responses without agreement were resolved by discussion between the researchers. (Tsai et al., 2011)
  61. 61. Research rigor (credibility) The reliability measure (proportion of agreement) for the qualitative analysis calculated as the agreement coefficient for the categories of students’ conceptions was 0.89. Disagreements were discussed after the reliability analysis, and those conceptions that were classified differently by the two reviewers were classified according to mutual agreement. (Zacharia, 2007)
  62. 62. Research rigor (credibility) The process of categorizing each student’s conceptions of learning and of web-based learning was performed by an expert researcher, and a second independent researcher validated the categorization of 20 participants’ interview responses. The agreement between these two researchers reached 0.88, indicating fairly acceptable reliability of classifying the students’ conceptions of learning and of web-based learning. (Tsai, 2009)
  63. 63. Research rigor (credibility) 1. An initial set of categories was identified by the first author and another colleague at the first author’s institution. This involved independent reading and classification of the entire set of journal entries and the interview transcripts. 2. These two people then compared and discussed their initial categories and agreed on a draft set of categories and sub-categories (Govender & Grayson, 2008)
  64. 64. Research rigor (credibility) After the initial categorization of the students' interview responses into the main and achieved levels, 15 interviews were randomly selected and classified by another researcher using the same coding criteria. The percentage of agreement regarding the main conceptions, achieved conceptions, main approaches, and achieved approaches was employed to determine the reliability of the two researchers' coding, showing 80%, 93%, 73% and 87%, respectively. Those interview responses without agreement were discussed by the two researchers. (Tsai & Tsai, 2013)
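Several of the studies above report inter-coder reliability as a simple proportion of agreement (82%, 0.88, 0.89, and so on). A minimal sketch of that computation, with Cohen's kappa as the usual chance-corrected alternative; the two coders' category assignments below are invented for illustration:

```python
from collections import Counter

def proportion_agreement(coder_a, coder_b):
    """Share of items both coders placed in the same category."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(coder_a)
    p_o = proportion_agreement(coder_a, coder_b)
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    # Expected chance agreement from each coder's marginal category rates.
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of 10 interview responses into three
# categories of description (labels are illustrative only).
a = ["surface", "deep", "deep", "surface", "strategic",
     "deep", "surface", "deep", "strategic", "deep"]
b = ["surface", "deep", "surface", "surface", "strategic",
     "deep", "surface", "deep", "strategic", "deep"]

print(f"agreement = {proportion_agreement(a, b):.0%}")  # agreement = 90%
```

Kappa is stricter than raw agreement because it discounts matches expected from the coders' marginal category frequencies alone; here 90% raw agreement corresponds to a kappa of about 0.84.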
65. 65. Research rigor (credibility) Throughout the analytical process, we maintained an open mind with no pre-existing categories, and constantly revisited the original transcript data in order to confirm, contradict, and modify the emerging hypotheses about the key meanings and their structural relationships to ensure that the interpretations were validly derived from the data. The use of the entire transcript or large chunks of each transcript helped to increase the accuracy of the interpretation (Åkerlind, 2005b). The analytical process did not end until a consistent set of categories of description regarding teachers’ conceptions of mobile learning was reached. After the initial categories and their hierarchical relationship were formed, we sought feedback from an experienced researcher for communicative validity check (Åkerlind, 2005c). Based on the feedback received, we then finalized the conception categories derived from the data. (Hsieh & Tsai, 2017)
  66. 66. Research rigor (credibility) Each data set was analyzed as a single unit with themes being identified and generated within the data sets. A narrative was then constructed to represent each of the student teachers’ experiences and drawing out the themes that characterized them. These written narratives were then shared with each of the student teachers to validate and authenticate the perceived meaning of their experiences. Interestingly none of the student teachers questioned or contested the narratives but in some cases offered further insights into their use of technologies beyond their professional practice. (Turvey, 2010)
67. 67. Research rigor (credibility) An issue in this kind of transcript analysis is the extent to which the outcomes are dependent on the idiosyncratic views of a single researcher. To provide some check on this, three researchers were involved in the process. The degree of agreement between Researcher 1 and Researchers 2 and 3 is shown in Table 1. The percentages shown in Table 1 give the initial level of inter-researcher agreement as well as the level of agreement after consultation between the researchers about category definitions, etc. Final levels of agreement in the classifications after consultation range from 88% to 100%. (Ellis et al., 2006)
  68. 68. Research rigor (credibility) The interviews were transcribed and all the participants’ answers to the first question were pooled and read by the researchers. In the initial stages of the process, students’ descriptions of their learning experiences in the course (from both open-ended questionnaires and interviews) were analysed independently by two members of the research team. Specifically, emerging themes were identified and then discussed. Discussion and refinement of the categories continued until total agreement was reached (the process of independent refinement of themes followed by discussion on the main thematic categories and their hierarchy was re-iterated several times until a final decision was made). The same procedure was applied for all questions. (Bliuc et al., 2011)
  69. 69. Research rigor (credibility) To enhance reliability and validity, the design and execution of this study closely followed traditional principles and procedures of phenomenographic research (Booth, 1993). After the initial examination of the participants' expressions and assignment of expressions to the categories by the first author, the researchers met to discuss emerging findings. This was followed by the establishment of dimensions of variation and an iterative refinement of the categories. Each iteration and emerging interpretations of findings were discussed in four subsequent meetings. Once the outcome space was established, the findings were brought in relationship to the literature and presented to two external experts for peer review. A clear record of the research process was maintained. (Limbu & Markauskaite, 2015)
  70. 70. Research rigor (credibility) … validity and reliability considerations were pursued by the researcher throughout the research process. Besides, a critical ‘friend’ was brought in to check dialogic reliability of the preliminary research findings and to test pragmatic validity; and the professional service sought to verify Maltese-to-English translations completed by the researcher to some degree served to check the interpretation of participants’ accounts. Communicative validity was also sought through the presentation of the work in relevant educational and research circles. Significant effort was expended to counterbalance the potential of individual interpretation problems, and the higher degree of partiality which might result from doing phenomenography as an individual researcher (Åkerlind, 2005b). (Cutajar, 2005)
  71. 71. Research rigor (credibility) After the initial classification of student essay responses into different categories, the data of 30 essays [out of 91 participants] were chosen at random and were categorised by another researcher independently using the same categories and coding criteria. The percentage of agreement was then calculated to determine the reliability of the researchers’ coding (Hayes & Hatch, 1999). The percentages of agreement for the conceptions and approaches were 73% and 80%, respectively, for the initial classifications. These percentages were increased up to 90% and 97% respectively after consultation by the two coders. (Yang & Tsai, 2017)
72. 72. Research rigor (credibility) The initial category descriptions required further development as different aspects of the student experience of learning through inquiry were emphasised by different students. To improve the category of descriptions, representative extracts from the questionnaire were chosen and discussed with supporting researchers. This led to the reconsideration of all of the responses in relation to the redrafted categories. Both the final version of the category descriptions and the re-categorisation of the extracts were agreed upon by all researchers. The redrafted categories became the final version and extracts from student responses that best represented the draft categories were selected. The draft categories and the representative extracts formed the outcome space for this part of the phenomenographic study and drew on the SOLO taxonomy as a way of structuring the hierarchy (Biggs 1999) (Ellis et al., 2005)
  73. 73. Research rigor (credibility) Students’ written responses from the three schools were collated and analysed by three independent researchers. The analysis was based on phenomenographic principles drawing on research by Prosser and Trigwell (1999), Crawford and colleagues (Crawford et al. 1994), and Ellis and colleagues (Ellis et al. 2004; Ellis et al. 2008). In the process of analysing the responses, the first step followed by the researchers was to independently read the materials (the collated responses coming from the three schools). This was followed by a more thorough examination of the responses with emerging themes for each question. … The researchers took the decision that the research made the most sense if the categories were derived from all the responses. The outcomes show that variation in students’ responses was identified in terms of quality but, in this study, it did not vary depending on the school or discipline. The researchers discussed emerging themes during the analysis process, which were continually refined. (Ellis et al., 2011)
  74. 74. Research rigor (credibility) The data we collected during interviews referred to each group and were usually confirmed by three or four persons (the teammates). In order to minimize the researcher’s subjectivity on the findings, participants were asked to further explain their answers and provide feedback on the researcher’s interpretations. The main question of interest underlying the qualitative analysis of interviews concerned how our group formation method affected students’ collaboration. The inner workings of the groups’ collaboration were realized during the group facilitation meetings and mainly during the students’ interviews, which helped us understand the qualitatively different ways in which students experienced, conceptualized, realized and understood collaboration. During the analysis of the collected data from students’ interviews, we followed the steps of the phenomenographic analysis (Ornek 2008) in order to identify specific categories. We identified significant elements in students’ answers and—based on their similarity—we grouped them in categories. We iteratively compiled, analyzed, segmented and sorted the students’ statements regarding their whole experience of collaboration. This process resulted to the identification of nine (9) initial subcategories [Sx] of description which were subsequently organized in four (4) main categories based on their conceptual relevance, which are presented in the next section. (Kyprianidou et al., 2012)
  75. 75. Research rigor (credibility) In phenomenography, the issue of reliability is addressed through ensuring that the categories are easily recognizable by others (Trigwell, 2006). Validity can be achieved through showing the appropriateness of the internal logic of the categories (Marton, 1986; Säljo, 1988). A group of independent judges can be used to establish reliability and validity by examining the transcripts of the responses to determine reliability of categories and the logic of the links among the categories, as illustrated through the outcome space (Åkerlind, 2005). In this research, two colleagues of the researcher (first author) performed this checking role. There was clear agreement with the researcher by these cojudges. Johansson et al (1985) report that interjudge reliability figures should be within the range of 75% to 100%. Säljo (1988) suggests that they should be in the order of 80% to 90%. Another way reliability can be enhanced is by examining the categories in the light of how they make sense amongst related studies documented in the literature. … (Stein, Shephard & Harris, 2011)
  76. 76. Research rigor (credibility) The first level of analysis involved having the researchers who had participated in the interviews independently examine a subset of one-quarter (n=5) of the transcripts of the student interviews to create a series of initial codes based on similarities and differences in the responses. The researchers then discussed their initial codes and possible categories that might emerge from the codes. The data were then subjected to a second and deeper analysis that helped the researchers develop categories that were more general. One of the goals of this process was developing internal consistency within each category. Another important goal was the development of as few general categories as were needed to describe all of the participants’ views. After the first set of categories had been developed from an analysis of five interview subjects, the remaining data were examined to look for additional categories of description. (Karatas et al., 2011)
  77. 77. Research rigor (credibility) Once a pile of transcripts was reviewed, transcripts in the other piles were interpreted for the similarities and differences in meanings described associated with using learning technology. If the themes emerging highlighted a different meaning to the earlier identified categories, then a new category was added to the initial set. Otherwise individual responses were included in one of the other categories previously identified in the analysis. Once all the transcripts had been analysed and grouped together, these formed ‘pools of data’. … To further understand the variation in lecturers’ meanings and conceptions of learning technology, quotes describing their action and intentions within one pool of data were analysed and compared with descriptions from other pools of data. This helped identify a series of illustrative ‘utterances’ from each pool of data. Quotes from original transcripts were re-read against the selected representative utterances to better understand the meanings associated with these utterances. (Hodgson & Shah, 2017)
78. 78. Research rigor (credibility) Simple vs Detailed | Loose vs Systematic | Practice-based vs Evidence-based
79. 79. General characteristics — Aim of phenomenography: Yes=32, No=0; Phenomenographic analytical procedures: Yes=30, No=2; Research rigor (credibility): Yes=22, No=10; Referential (categories of description): Yes=27, No=5; Structural (outcome space): Yes=10, No=22
80. 80. Referential (Categories of description) Key features? Source: Ellis et al. (2005, p. 246)
  81. 81. Referential (Categories of description) Key features? Source: Tsai (2009)
  82. 82. Referential (Categories of description) Key features? Source:
83. 83. Referential (Categories of description) Key features? Source: Hodgson & Shah (2017)
84. 84. General characteristics — Aim of phenomenography: Yes=32, No=0; Phenomenographic analytical procedures: Yes=30, No=2; Research rigor (credibility): Yes=22, No=10; Referential (categories of description): Yes=27, No=5; Structural (outcome space): Yes=10, No=22
  85. 85. Structural (Outcome space) Key features?
  86. 86. Structural (Outcome space) Key features? Source: Limbu & Markauskaite (2015)
  87. 87. Structural (Outcome space) Key features? Source: Cutajar (2017)
  88. 88. Structural (Outcome space) Key features?
  89. 89. Research question 2: What are the key limitations and possible future development in the use of phenomenography in educational technology studies as stated in the published articles in the SSCI journals?
90. 90. General characteristics — Limitation: Yes=14, No=18; Significance/Contribution: Yes=17, No=15
91. 91. Key limitations 1. Sample size 2. Diversity • Background • Experiences • Abilities 3. Research methods
92. 92. 1. Sample size “Because of the relatively large population sample for a qualitative study, it was only possible to survey the students (rather than conduct one-on-one interviews).” (Ellis et al., 2011) “It is not possible to generalise from this study to other contexts or assume other lecturers’ conceptions of using learning technology would be the same.” (Hodgson & Shah, 2017) “… it is not possible to generalise the results. Specifically, the size of the group of students participating in this study was relatively small, and the number of MST attempted during this experiment was also limited.” (Kordaki, 2015) Practicality vs Generalizability
93. 93. 1. Sample size “The main limitation of phenomenographic analysis relates to the generalisability and utility of findings, i.e. the ability to draw descriptive or inferential conclusions from the sample data about the wider group of art and design faculty members, vis-à-vis their perceptions about the instructional potential of the iPad.” (Souleles, 2017) Practicality vs Generalizability “… the study sample was relatively small and from one university. These students may represent more homogenous views than students selected from a range of universities.” (Markauskaite, 2015) “… we must be cautious about over generalising from the findings of this study. What we have found regarding the preschool teachers here might be specific to the context of this particular online education scenario.” (Yang, 2017)
94. 94. 2. Diversity … the participants’ diverse backgrounds and broad range of research topics meant that they encountered ICT outside their present degree, and in their current research, in different ways. (Markauskaite & Wardak, 2015) Due to the abilities of participants to express or identify their habits, approaches and behaviours (Yang & Tsai, 2017)
  95. 95. 2. Diversity An additional limitation of this study is that, since it was critical to capture the unimpeded experience of faculty, a prescriptive approach on how to use the iPads was avoided, and subsequently little is known about the different contexts of use. (Souleles et al. 2017) It may be argued that when analysing conception and approach categories, it would be very likely that each participant would have expressed experiences that could be classified within a number of the categories of description. It would be less likely that all individual participants would be able to be classified into single categories. (Yang & Tsai, 2017)
  96. 96. 3. Research methods It has a relatively small population sample (n=5133) and has used only one qualitative instrument: an open-ended questionnaire. Its aim has been to evaluate the quality of learning arising from the blended experience that leaves little room for an analysis of skills, rather focusing on the meaning underpinning the approaches to blended learning. (Ellis et al., 2005) Application of a single method
97. 97. 3. Research methods First, the interview questions prompted participants to reflect on the major elements of the research knowledge cycle, but did not provide them with a detailed definition of ICT and did not ask about specific technologies, methodologies or applications. Thus, the software tools and ways of using ICT, as discussed by the participants, do not necessarily represent canonical notions of ICT, or a full range of possible ICT uses in educational technology research. However, these technologies are indicative of students’ experiences and understanding of the scope of this domain, which was the main focus of this study. (Markauskaite & Wardak, 2015) Application of “right” interview questions
98. 98. General characteristics — Limitation: Yes=14, No=18; Significance/Contribution: Yes=17, No=15
  99. 99. Significance/ Contributions The qualitative categories in this study as well as the empirical relationship among these categories have provided some insights into conceptions of and approaches to learning, and have revealed aspects not considered previously. (Yang & Tsai, 2017) Understanding of learning & teaching
  100. 100. Significance/ Contributions This study presented evidence that students’ conceptions of learning differed from those of web-based learning. These findings also strengthen the need for exploring students’ conceptions of web-based learning, as they are positively related to better ways and outcomes of web-based learning. Also, relevant research for investigating students of other grade levels, such as high school, may be of importance. (Tsai & Tsai, 2009) Understanding of learning & teaching
101. 101. Significance/ Contributions Within the research-based design paradigm, this study is relevant not only because it contributes to clarifying students’ conceptions about sound attenuation and of the acoustic properties of materials, but also because the findings can be used to adjust and support the decisions made when designing a teaching sequence, like those already developed by Pintó et al. (2009) on the Acoustic Properties of Materials. … the research study … gave us some insight into how to organise the content to be taught in the TLS on tasks, we designed a teaching/learning sequence drawn upon the findings presented here. The teaching sequence allows engaging students in the analysis of the relationship between the properties and internal structure of specific materials so that they are able to account for their acoustic behaviour. (Hernandez et al., 2012) Understanding of learning & teaching Pedagogical design
102. 102. Significance/ Contributions Nonetheless, the findings in this study do offer some interesting possibilities for thinking about the use of learning technology in their daily practices. The study thus potentially offers new insights into why lecturers often feel unable or unwilling to either teach according to their pedagogical preferences or integrate well learning technology into their pedagogical practice. (Hodgson & Shah, 2017) Understanding of learning & teaching Pedagogical design
103. 103. Significance/ Contributions This exploratory study points to a direction of research that seems very promising: to encourage as many students as possible (and why not ALL students?) to be actively, effectively, creatively and passionately engaged in their learning. In fact, the experience of this study clearly supports the idea that the provision of tasks allowing for and demanding solutions from each student ‘in as many ways as possible’, while at the same time providing tools designed in such a way as to support the expression of the learning concepts in different representation systems, can efficiently support students in expressing themselves to their fullest extent in terms of the learning concepts in question. (Kordaki, 2015) Understanding of learning & teaching Pedagogical design
  104. 104. Significance/ Contributions This conceptualisation of the student’s NL [networked learning] experiencing gives an alternative explanation which is more constructive and potentially may prove significant to bring in closer alignment our thinking about the NL approach and supporting our students’ learning using networked technologies. (Cutajar, 2017) Understanding of learning & teaching Pedagogical design
105. 105. Significance/ Contributions The findings of these studies may shape educators' insights into students' learning experience, and lead them to develop better teaching designs to help students engage in meaningful learning. Therefore, future study is suggested to explore the effects of conditions, conceptions and approaches on the quality of online argumentation. In addition, researchers have explored some phenomenographic investigations into students' experiences of learning at both online and face-to-face contexts, involving conceptions of and approaches to learning through discussion (Ellis et al., 2008, 2006), and conceptions of learning management (Lin & Tsai, 2011). Those studies provided some insights into students' online and face-to-face activity from students' perspectives… (Tsai & Tsai, 2013) Understanding of learning & teaching Pedagogical design
  106. 106. Significance/ Contributions This investigation sets out to identify from the perspective of students the range of perceptions and views about the use of iPads in undergraduate art and design programmes with a significant component of SBL. The uniqueness of this study is that it sought the unhindered perceptions of students—within the limits of the methodology and research design—and was not based on a specific instructional intervention. (Souleles et al., 2017) Understanding of learning & teaching Pedagogical design
  107. 107. Significance/ Contributions Despite these limitations, the findings are of significant interest and require reflection in what they may suggest for research, student learning, and educational practice. … The outcomes of the analyses in this study have identified qualitatively different experiences of case-based learning in undergraduate veterinary science…. Broadly speaking, when students think about case-based learning in fragmented ways, that is, when they conceive of it as unrelated to the development of appropriate diagnoses and the professional practices of veterinarians, then there are significant difficulties in the case-based learning experience: approaches are not closely associated with understanding, and the online part of the experience seems to be marginally related at best. This type of research outcome suggests that the research methodologies adopted from student learning research for this study are appropriate evaluation tools for the evaluation of blended learning environments. More studies investigating variation in the student blended learning experience will help to provide evidence of the quality and issues raised by the combination of face-to-face and online learning environments in higher education. (Ellis et al., 2005) Understanding of learning & teaching Evaluation
108. 108. Significance/ Contributions This study investigated the conceptions of mobile learning held by 15 Taiwanese high school teachers, and revealed six conception categories and their hierarchical relationships. Nevertheless, the six categories obtained in this study could serve as a basis for developing a questionnaire to assess teachers’ attitudes toward mobile learning. The findings of this study could inform teacher education and professional development programs that wish to promote mobile learning in school settings. For instance, teachers could reflect on their teaching practice with mobile devices with the aim of expanding their perspectives on mobile learning. If teacher educators would like to see mobile devices used to their fullest potential, it is necessary to cultivate more sophisticated conceptions of mobile learning among teachers. (Hsieh & Tsai, 2017) Understanding of learning & teaching Professional development
  109. 109. Final remarks
110. 110. Final remarks Limitations • Coverage (publication type, year, technology, etc.) • Quality of processes (transcriptions; interviewing processes; external reviews; tools for analysis, etc.) • How to further help teachers/educators… Contributions • Methodological rigor and practices • Feasibility of using phenomenography in supporting educational technology studies Conceptions Experiences
  111. 111. Contact: sallywywan@cuhk.edu.hk
