CHAPTER 3

RESEARCH DESIGN AND METHODOLOGY

Introduction

The review of literature has produced recurring themes emphasizing the importance of technological literacy for citizens in the 21st century (Garfinkel, 2003; Hall, 2001; Lemke, 2003; Murray, 2003; NAE, 2002; Partnership for 21st Century Skills, 2003; Rose & Dugger, 2003; Zhao & Alexander, 2002; U.S. Department of Education, 2004; Technology Counts, 2005). Education is a critical component in preparing students for a knowledge-based, digital society. According to Hall (2001), available technologies, our perceptions of those technologies, and how they are used will determine the shape of our world. Citizens of the future will face challenges that depend on the development and application of technology. Are we preparing students, the citizens of tomorrow, for these challenges?

Purpose of the Study

This study developed and implemented a faculty survey and a student assessment. The purpose of the faculty survey was to determine what basic computer skills are needed by undergraduate students for academic success in post-secondary education. This phase of the study examined the data collected for trends and differences across the independent variables of subject/content area, institution, gender, and years of faculty experience. The purpose of the student assessment was to evaluate the computer competencies of students entering post-secondary education. This phase of the study examined the data collected for trends and differences across the independent variables
of home state, number of high school computer courses taken, gender, and major field of study. Data collection and analysis assisted in determining whether students possess the necessary computer/technology skills when entering a post-secondary institution or whether a need exists for a general education course to teach computer literacy/skills to the undergraduate student population. This study also provided valuable information regarding the content of such a course.

Research Questions

1. What technology skills do post-secondary faculty members deem important for all students to possess at the college level?

2. Are there differences between the student technology skills post-secondary faculty members deem important when grouped by subject/content area, institution/stratum, gender, or years of faculty experience?

3. What technology skills can students demonstrate proficiently upon entering a post-secondary institution?

4. Are there differences between the proficiency levels of students' technology skills when grouped by home state, number of high school computer courses, gender, or major field of study?

5. Are students technologically ready upon entering post-secondary education, or does a need exist for a computer literacy/skills course for all undergraduate students?

Instrumentation

Two instruments were employed for data collection in this research study: a faculty survey and a student assessment. A faculty survey was designed by the researcher
to help identify technology/computer skills deemed important for undergraduate students to possess in order to be successful in their post-secondary endeavors. A survey research design was applied to investigate the research questions.

A second instrument was developed and implemented to assess the technology skills of freshman undergraduate students who had not yet taken a post-secondary computer literacy/skills course. A description of the two instruments used in this study follows.

Faculty Survey

Introduction

According to Leedy and Ormrod (2001), "Research is a viable approach to a problem only when there are data to support it" (p. 94). Nesbary (2000) defines survey research as "the process of collecting representative sample data from a larger population and using the sample to infer attributes of the population" (p. 10). The main purpose of a survey is to estimate, with significant precision, the percentage of a population that has a specific attribute by collecting data from a small portion of the total population (Dillman, 2000; Wallen & Fraenkel, 2001). The researcher wanted to learn from members of the population their views on one or more variables. As noted by Borg and Gall (1989), studies involving surveys comprise a significant amount of the research done in the education field. Data are ever-changing, and survey research portrays a brief moment in time to enhance our understanding of the present (Leedy & Ormrod, 2001). Educational surveys are often used to assist in planning and decision making, as well as to evaluate the effectiveness of an implemented program (McNamara, 1994; Borg & Gall, 1989).
An online faculty survey was conducted to identify computer literacy skills that faculty members deem important for an undergraduate student to possess in order to be academically successful at the post-secondary level.

Population and Sample

The population for this faculty survey consisted of post-secondary faculty members at four-year public institutions in the state of Missouri. Four-year public institutions were identified by visiting the Missouri Department of Higher Education Web site at http://www.cbhe.state.mo.us/Institutions/pubinst.htm. Private or independent institutions and community colleges were not included in the population. Thus the sampling frame consisted of all faculty members at thirteen institutions in Missouri, as summarized in Table 1.

Table 1
Summary of 4-Year Public Institutions in Missouri

  Central Missouri State University
  Harris-Stowe State College
  Lincoln University
  Missouri Southern State University
  Missouri Western State College
  Northwest Missouri State University
  Southeast Missouri State University
  Southwest Missouri State University
  Truman State University
  University of Missouri-Columbia
  University of Missouri-Kansas City
  University of Missouri-Rolla
  University of Missouri-St. Louis

A sample was drawn from the sampling frame. A sampling frame is the actual list of individuals included in the population (Nesbary, 2000); in this study it comprised approximately 4821 faculty members. According to Patten (2004), the quality of the
sample affects the quality of the research generalizations. Nesbary (2000) suggests that the larger the sample size, the greater the probability the sample will reflect the general population. However, sample size alone does not guarantee the ability to generalize. Patten (2004) states that obtaining an unbiased sample is the main criterion when evaluating the adequacy of a sample. Patten also identifies an unbiased sample as one in which every member of the population has an equal opportunity of being selected for the sample. Therefore, random sampling was used in this study to help ensure an unbiased sample. Because random sampling may introduce sampling errors, efforts were made to reduce these errors, and thus increase precision, by increasing the sample size and by using stratified random sampling. To obtain a stratified random sample, the population was divided into strata according to institution, as shown in Table 2. Typically, for stratified random sampling, the same percentage of participants, not the same number of participants, is drawn from each stratum (Patten, 2004); a brief computational sketch of this proportional allocation follows Table 2.

Table 2
Strata (Subgroups) for Stratified Random Sampling

  Instructors and professors at Central Missouri State University
  Instructors and professors at Harris-Stowe State College
  Instructors and professors at Lincoln University
  Instructors and professors at Missouri Southern State University
  Instructors and professors at Missouri Western State College
  Instructors and professors at Northwest Missouri State University
  Instructors and professors at Southeast Missouri State University
  Instructors and professors at Southwest Missouri State University
  Instructors and professors at Truman State University
  Instructors and professors at University of Missouri-Columbia
  Instructors and professors at University of Missouri-Kansas City
  Instructors and professors at University of Missouri-Rolla
  Instructors and professors at University of Missouri-St. Louis
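To make the proportional allocation concrete, the sketch below (written in Python purely for illustration; the study itself used SPSS for its statistical work) shows how a target sample can be split across the thirteen strata so that every institution contributes the same sampling fraction. The per-institution faculty counts shown are hypothetical placeholders; only the approximate frame total of 4821 faculty members and the eventual sample size goal of 357, discussed in the next paragraphs, come from the study.

    # Proportional (stratified) allocation: each stratum contributes the same
    # sampling fraction, not the same number of participants (Patten, 2004).
    # The per-institution counts below are hypothetical placeholders; in the
    # study the thirteen strata together held roughly 4821 faculty members.
    strata_sizes = {
        "University of Missouri-Columbia": 1500,      # placeholder count
        "Truman State University": 350,               # placeholder count
        "Northwest Missouri State University": 280,   # placeholder count
        # ... the remaining ten institutions would be listed the same way
    }

    def proportional_allocation(sizes, target_n):
        """Return the number of faculty to sample from each stratum."""
        total = sum(sizes.values())
        fraction = target_n / total          # identical percentage per stratum
        return {stratum: round(count * fraction) for stratum, count in sizes.items()}

    # With the study's goal of 357 participants, each institution receives its
    # proportional share; individuals would then be drawn at random from within
    # each stratum (e.g., with random.sample).
    print(proportional_allocation(strata_sizes, target_n=357))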
Patten (2004) suggests that a researcher should first consider obtaining an unbiased sample and then seek a relatively large number of participants. A table of recommended sample sizes (n) for populations (N) of finite size, developed by Krejcie and Morgan and adapted by Patten (2004), was used to determine the estimated sample size. According to the table, and for the purposes of this study, the researcher used an estimated population size of N = 4821 and thus a sample size goal of n = 357 (a worked check of this figure appears at the end of the Survey Procedures section below).

Survey Procedures

In 1998, according to Nesbary (2000), Web surveys were almost non-existent in the public sector. Nesbary decided to test the waters and conducted three surveys to compare the response rates and response times of Web surveys with those of regular mail surveys. Survey results and respondent feedback from all three surveys indicated that Web surveys were more cost effective, easier to use, had quicker response times, and drew a greater number of responses. One of Nesbary's Web surveys was distributed to selected universities. Of those surveyed, respondents indicated a strong preference for the use of technology to take advantage of its speed and convenience.

The researcher used a Web-based survey for the faculty survey portion of this study. UNL IRB approval was obtained (Appendix A). Two approvals for change of protocol were also obtained, one for changing the title of the study (Appendix B) and the other for changing the survey format (Appendix C). From the original IRB request, the survey was condensed to reduce the number of items, shortening the survey to increase the response rate.
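The sample size goal above can also be checked without the published table by applying the Krejcie and Morgan (1970) formula on which such tables are based. The short sketch below is an illustrative check only, using the conventional values of chi-square = 3.841 (one degree of freedom, 95 percent confidence), P = 0.5, and d = 0.05; for N = 4821 it yields approximately 356, essentially the tabled value of 357 adopted in the study (the published table lists only selected population sizes).

    # Krejcie & Morgan (1970) formula for a finite population:
    #   n = X2 * N * P * (1 - P) / (d**2 * (N - 1) + X2 * P * (1 - P))
    # X2: chi-square value for 1 degree of freedom at the desired confidence level
    # P:  assumed population proportion (0.5 gives the largest, most conservative n)
    # d:  acceptable margin of error

    def krejcie_morgan(N, X2=3.841, P=0.5, d=0.05):
        return (X2 * N * P * (1 - P)) / (d ** 2 * (N - 1) + X2 * P * (1 - P))

    print(round(krejcie_morgan(4821)))   # about 356, close to the tabled goal of 357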
Ethical Issues

McNamara (1994) identifies five ethical concerns to be considered when conducting survey research. These guidelines deal with voluntary participation, no harm to respondents, anonymity and confidentiality, identifying the purpose and sponsor, and analysis and reporting. Each guideline is addressed individually below, with an explanation of how the related ethical concerns were eliminated or controlled.

First, researchers need to make sure that participation is completely voluntary. However, voluntary participation can sometimes conflict with the need for a high response rate, and low return rates can introduce response bias (McNamara, 1994). To encourage a high response rate, Dillman (2000) suggests multiple contacts. For this study, up to five contacts were made per potential participant. The first email contact (Appendix D) was sent a few days preceding the survey, both to verify email addresses and to inform possible participants of the importance of and justification for the study. The second email contact (Appendix E) was the actual email cover letter explaining the study objectives in more depth; it contained a link to the Web-based survey and a password for entry. By clicking on the link provided and logging into the secure site, the participant indicated agreement to participate in the research study. The third email contact (Appendix F) was sent a week later as a reminder to those who had not responded. The fourth email contact (Appendix G) was sent two weeks after the survey email, reemphasizing the importance of faculty expertise in providing input to the study. The fifth and final email contact (Appendix H) was sent three weeks after the
actual survey email to inform faculty that the study was drawing to a close and that their input was valuable to the results of the study.

McNamara's (1994) second ethical guideline is to avoid possible harm to respondents, which could include embarrassment or feeling uncomfortable about questions. This study did not include sensitive questions that could cause embarrassment or uncomfortable feelings. Harm could also arise in data analysis or in the survey results; solutions to these concerns are discussed under the confidentiality and report writing guidelines.

A third ethical guideline is to protect a respondent's identity. This can be accomplished by exercising anonymity and confidentiality. A survey is anonymous when a respondent cannot be identified on the basis of a response. A survey is confidential when a response can be identified with a subject but the researcher promises not to disclose the individual's identity (McNamara, 1994). To avoid confusion, the cover email clearly identified the survey as confidential with regard to responses and the reporting of results. Participant identification was kept confidential and was used only to determine who had not responded for follow-up purposes.

McNamara's (1994) fourth ethical guideline is to let all prospective respondents know the purpose of the survey and the organization that is sponsoring it. The purpose of the study was provided in the cover email, indicating a need to identify the technology skills necessary for students to be successful in their academic coursework and to determine whether a general education computer literacy/skills course should be required of all undergraduate
students. The cover email also explained that the results of the study would be used in a dissertation as partial fulfillment of a doctoral degree.

The fifth ethical guideline, as described by McNamara (1994), is to accurately report both the methods and the results of the surveys to professional colleagues in the educational community. Because advancements in academic fields come through honesty and openness, the researcher assumes the responsibility to report problems and weaknesses experienced as well as the positive results of the study.

Validity and Reliability Issues

An instrument is valid if it measures what it is intended to measure and accurately achieves the purpose for which it was designed (Patten, 2004; Wallen & Fraenkel, 2001). Patten (2004) emphasizes that validity is a matter of degree and that discussion should focus on how valid a test is, not on whether it is valid or not. According to Patten (2004), no test instrument is perfectly valid. The researcher needs some kind of assurance that the instrument being used will result in accurate conclusions (Wallen & Fraenkel, 2001).

Validity involves the appropriateness, meaningfulness, and usefulness of inferences made by the researcher on the basis of the data collected (Wallen & Fraenkel, 2001), and it is often judgmental. According to Patten (2004), content validity is determined by judgments on the appropriateness of the instrument's content. Patten (2004) identifies three principles to improve content validity: 1) use a broad sample of content rather than a narrow one, 2) emphasize important material, and 3) write questions to measure the appropriate skill. These three principles were addressed when writing the survey items. To provide additional content validity of the survey
instrument, the researcher formed a focus group of five to ten experts in the field of computer literacy who provided input and constructive feedback on the survey items. Members of the focus group were educators at the college and/or high school level who have taught or are currently teaching computer literacy skills. Comments from the focus group indicated that the skills listed in the survey were basic to intermediate skills and were appropriate for all college students to know and be able to do. Some members of the focus group suggested that the survey might be a bit long and that skills could be generalized and consolidated for a more concise survey. The researcher therefore categorized the application skills and condensed the application component from 20 items per application to eight items per application. The computer concepts component was reduced from 22 items to eight items.

According to Patten (2004), ". . . validity is more important than reliability" (p. 71). However, reliability does need to be addressed. Reliability relates to the consistency of the data collected (Wallen & Fraenkel, 2001). Cronbach's coefficient alpha was used to determine the internal reliability of the instrument. The faculty survey instrument was tested in its entirety, and the subscales of the instrument were tested independently (a small computational sketch of this coefficient appears below, after the pilot-study discussion).

Data Collection

An informal pilot study was conducted with a small group of faculty members at the researcher's home institution. Conducting a local pilot study allowed the researcher to ask participants for feedback on the survey and also helped eliminate author bias. Once the pilot survey had been modified in response to the educational experts' feedback, the survey was administered online to the stratified random sample.
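For readers unfamiliar with the statistic, the following is a minimal sketch of Cronbach's coefficient alpha for a single subscale. It is a generic illustration rather than the SPSS procedure actually used, and the small response matrix is invented purely for demonstration.

    # Cronbach's coefficient alpha for one subscale of k items:
    #   alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    # In the study, alpha was computed for the whole faculty survey and for each
    # subscale independently; the data below are invented for illustration only.

    def cronbach_alpha(responses):
        """responses: one list per respondent, holding that person's item scores (1-4)."""
        k = len(responses[0])                      # number of items in the subscale
        def variance(values):
            mean = sum(values) / len(values)
            return sum((v - mean) ** 2 for v in values) / (len(values) - 1)
        item_variances = [variance([person[i] for person in responses]) for i in range(k)]
        total_variance = variance([sum(person) for person in responses])
        return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

    sample_responses = [[4, 3, 4, 4], [2, 2, 3, 2], [3, 3, 3, 4], [1, 2, 2, 1]]  # invented
    print(round(cronbach_alpha(sample_responses), 2))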
Participants of the study were contacted by email explaining the research objective and asking them to participate. The objective of the research was to gather information about technology skills, in particular the technology skills students should possess to be successful in their post-secondary courses. The email also contained a link to the Web-based faculty survey and a password to enter the survey. Follow-up email contacts were sent to increase the response rate. Upon completion of the survey, each respondent was directed to a Web page thanking them for their response and offering them a copy of the study results if they were interested. Screen shots of the Web-based faculty survey are presented in Appendix I.

The Web-based survey was conducted using surveymonkey.com, a survey software program offered online. For a small fee, the program offered many features, including an unlimited number of survey questions, the ability to add a personalized logo, custom redirects, result filtering, and the capability to export data for statistical analysis. The program provided a list management tool through which responses could be tracked by email address, which proved very useful for follow-up emails. The program also provided security features, including the option to turn on SSL (Secure Sockets Layer) to encrypt and protect the data.

Responses to the survey were recorded, exported to a spreadsheet, and transferred to a statistical software package for in-depth analysis. Descriptive statistics were calculated and data relationships were analyzed.
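To illustrate the export-and-analyze step just described, the sketch below shows how an exported response spreadsheet might be read and reduced to category composite scores and descriptive statistics. The file name, column names, and the use of the pandas library are assumptions made only for this example; the study performed the equivalent work in SPSS. The 1-to-4 response coding and the five subscales follow the Data Analysis Plan described in the next section.

    # Illustrative only: read an exported survey spreadsheet and build the five
    # category composite scores described in the Data Analysis Plan below.
    # The file name and column names are hypothetical.
    import pandas as pd

    responses = pd.read_csv("faculty_survey_export.csv")    # hypothetical export file

    # Items are assumed to be coded 1-4 (not important ... very important) and
    # named by category, e.g. wordprocessing_1 through wordprocessing_8.
    categories = ["wordprocessing", "spreadsheet", "presentation", "database", "concepts"]

    for category in categories:
        item_columns = [c for c in responses.columns if c.startswith(category)]
        # Composite score: the sum of all item codes within the same category
        responses[category + "_composite"] = responses[item_columns].sum(axis=1)

    # Descriptive statistics for the composite scores
    print(responses[[c + "_composite" for c in categories]].describe())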
Variables and Measures

The variables used in the survey are summarized in Table 3. They consisted of seven independent variables that grouped respondents by common characteristics and five dependent variables that grouped responses by content categories. The independent variables included professional title, institution, department/content area, school size, gender, number of years at the current institution, and total number of years in education. The dependent variables included word processing, spreadsheet, presentation, database, and computer concepts.

Table 3
Summary of Dependent and Independent Variables in the Faculty Survey

  Independent Variables (n = 7)                    Dependent Variables (n = 5)
  Professional title                               Word processing
  Institution                                      Spreadsheet
  Department                                       Presentation
  School size                                      Database
  Gender                                           Computer concepts
  Years as faculty member at current institution
  Years in education

Data Analysis Plan

To begin the data analysis process, descriptive statistics were calculated on the independent variables to summarize and describe the data collected. Survey results were measured by category; there were five categories (subscales), representing the five dependent variables. Responses to the survey items were coded from 1 to 4 according to the importance of each skill: one represented 'not important', two represented 'somewhat important', three represented 'important', and four represented 'very important'. The codes for all survey items in the same category were summed to produce a composite
score per category. This category composite score was used for statistical analysis. Item analysis was conducted to determine the internal consistency and reliability of each individual item as well as each subscale. Cronbach's alpha was also used to test internal reliability.

Inferential statistics were used to reach conclusions and make generalizations about the characteristics of the population based on data collected from the sample. Frequencies and/or percentages were used to identify the computer skills that faculty members deem important for all students to possess. Independent t-tests and/or simple analysis of variance (ANOVA) were used to look for significant differences between the student technology skills faculty members deem important when grouped by department/content area, institution/stratum, gender, or years of faculty experience. The types of tests used to answer specific research questions are summarized in Table 4. A statistical software program, SPSS (Statistical Package for the Social Sciences), was used for the in-depth data analyses.

Table 4
Summary of Data Sources, Types, and Measures Applied by Research Question

  Research Question #   Data Source                Response Type   Data Type   Analysis Plan
  1                     Faculty survey responses   Likert scale    Nominal     f, %
  2                     Faculty survey responses   Likert scale    Nominal     t test, ANOVA

Student Assessment

Introduction

To assist in evaluating the technology skills of students, a technology assessment was conducted to determine the computer literacy and performance skills of students entering a post-secondary institution prior to taking a computer course at the post-secondary level.
Population and Sample

The population for the student assessment consisted of college freshmen from a small Midwestern university who were enrolled in a computer literacy course. Permission from students to use their scores in the study was requested through informed consent forms. This resulted in a sample size of 164 students.

Student Assessment Procedures

The purpose of the student assessment was to describe specifically what a typical student entering post-secondary education knows about computer operations and concepts, as well as the computer skills they can demonstrate proficiently. An additional Northwest Missouri State University IRB form (Appendix J) was obtained, and student consent forms (Appendix K) were collected from participants. A series of assessments was given to all students during the first few weeks of a computer literacy course to determine the computer skills students possessed prior to taking the course.

Measurement Instrument

The student assessment consisted of a few demographic questions and two major components: 1) computer concepts and 2) computer application skills.

The computer concepts component of the assessment covered six different modules. Module one questions covered computer and information literacy, introduction to application software, word processing concepts, and inside the system. Module two questions covered understanding the Internet, email, system software, and exploring the Web. Module three questions covered spreadsheet concepts, current issues, emerging
technologies, and data storage. Module four questions covered presentation packages, special purpose programs, multimedia/virtual reality, and input/output. Module five questions covered database concepts, telecommunications, and networks. Module six questions covered creating a Web page, ethics, and security. The computer concepts assessment consisted of 150 questions, 25 randomly selected from each of the six module test banks; the number of questions in each module test bank ranged from 143 to 214. This portion of the study was administered using an online program called QMark (Question Mark). The assessment was automatically graded, and scores were recorded on a server at Northwest Missouri State University.

The computer application skills component was assessed using a commercial software program called SAM (Skills Assessment Manager). SAM is a unique performance-based testing software program that utilizes realistic, powerful simulations. The software package works just like the actual Microsoft Word, Excel, Access, and PowerPoint applications, but without the need for preinstalled Microsoft Office software. Course Technology, the publisher of SAM, provided the researcher with a site license for use in this research. The application skills assessed for this study included word processing, spreadsheet, presentation, and database skills.

Validity and Reliability Issues

Patten's (2004) three principles for improving content validity (use a broad sample of content rather than a narrow one, emphasize important material, and write questions that measure the appropriate skill) were addressed when developing the assessment items.
In 1998, Course Technology introduced SAM 1997 and has continued to update the product through SAM 2000, SAM 2003, and now SAM XP. According to Course Technology, SAM is the most powerful testing and reporting tool available. According to Course Technology (2002):

    SAM is becoming the provider of the most widely-used and effective technology-based assessment product line for Microsoft Office used in educational institutions today. SAM is used at high schools, colleges, career colleges, MBA programs, and in the workplace as a screening tool for placing people in the right training courses, a 'test-out' tool to determine students' proficiency before they take a course, and a seamless, in-course assessment tool to allow students to demonstrate their proficiency as they go through a course.

Proficiency skills on the assessment matched categories on the faculty survey so that results could be compared.

Data Collection

All students enrolled in the course completed the assessments during the first few weeks of class using the SAM and QMark software to determine their computer literacy/skill proficiency level prior to taking the computer literacy course. The assessments were graded online and the results were immediate. Prior to taking the assessment, the participants answered a few demographic questions. A list of the
demographic questions can be found in Appendix L. Computer concepts questions were randomly selected from a test bank of questions, and sample screen shots can be found in Appendix M. A list of the proficiency skills for the computer applications component of the student assessment can be found in Appendix N. Student names were kept confidential to ensure individual privacy.

Variables and Measures

The variables used in the student assessment are summarized in Table 5. They consisted of five independent variables that grouped participants by common characteristics and five dependent variables that grouped participants by content categories. The independent variables included student home state, size of high school graduating class, number of high school computer courses taken, gender, and post-secondary major field of study. The dependent variables consisted of computer skills grouped into five categories: word processing, spreadsheet, presentation, database, and computer concepts.

Table 5
Summary of Dependent and Independent Variables in the Student Assessment

  Independent Variables (n = 5)              Dependent Variables (n = 5)
  Home state                                 Word processing
  High school size                           Spreadsheet
  Number of high school computer courses     Presentation
  Gender                                     Database
  Major                                      Computer concepts

Data Analysis Plan

Results of the student assessment were recorded in a spreadsheet and transferred to SPSS for statistical analysis. Descriptive statistics and data relationships were calculated. Independent t-tests and simple analysis of variance (ANOVA) were used to
look for significant differences in the proficiency levels of students' technology skills when grouped by home state, number of high school computer courses taken, gender, and major field of study. SPSS was again used for the in-depth data analyses. The types of tests used to answer specific research questions are summarized in Table 6, and an illustrative sketch of these group comparisons follows the table.

Table 6
Summary of Data Sources, Types, and Measures Applied by Research Question

  Research Question #   Data Source                Response Type   Data Type   Analysis Plan
  3                     Student assessment score   Percentage      Interval    f, %
  4                     Student assessment score   Percentage      Interval    t test, ANOVA
  5                     Student assessment score   Percentage      Interval    f, %
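As a closing illustration of the group comparisons named in the two data analysis plans above, the sketch below runs an independent t-test for a two-group variable (gender) and a one-way ANOVA for a multi-group variable (major field of study) on an assessment score. It is a generic pandas/SciPy example under assumed file and column names; the study itself performed the equivalent tests in SPSS.

    # Illustrative group comparisons; file and column names are hypothetical.
    import pandas as pd
    from scipy import stats

    scores = pd.read_csv("student_assessment_scores.csv")   # hypothetical export

    # Independent t-test: two groups (e.g., gender) compared on the overall score
    male = scores.loc[scores["gender"] == "M", "total_score"]
    female = scores.loc[scores["gender"] == "F", "total_score"]
    t_stat, t_p = stats.ttest_ind(male, female)

    # One-way ANOVA: more than two groups (e.g., major field of study)
    groups = [g["total_score"].values for _, g in scores.groupby("major")]
    f_stat, f_p = stats.f_oneway(*groups)

    print(f"t test (gender): t = {t_stat:.2f}, p = {t_p:.3f}")
    print(f"ANOVA (major):   F = {f_stat:.2f}, p = {f_p:.3f}")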
