CARLOS ALBIZU UNIVERSITY, SAN JUAN CAMPUS

MASTER SYLLABUS

RMIC-823: Experimental Design in Psychology

CREDITS: 3
CONTACT HOURS: 45

COURSE DESCRIPTION
This course presents the basic principles and methods of scientific research in psychology. Hypothesis testing, experimental design options, sample selection, control-group strategies, criterion measures, and data analysis and interpretation will be discussed. The course prepares students to design research by applying rigorous scientific methodology.

PRE-REQUISITES
RMIC-725: Introduction to Research; RMIC-822: Analysis of Variance; and RMIC-824: Correlation and Regression

COURSE OBJECTIVES
To learn the principles and methods of experimental research in psychology. These include: generalization, explanation, and prediction in experimentation; the logical bases of experimental inference; experimental control; and experimental designs and their application to statistical interpretation. Students will learn to develop research projects. The values of scientific research from a social, professional, and ethical perspective will also be highlighted. Students will understand the importance of selecting an adequate research design to obtain significant results.

REQUIRED TEXTBOOKS
McBurney, D. H. & White, T. L. (2004). Research methods. Belmont, CA: Thomson Wadsworth. ISBN: 0-534-52418-4

Cook, T.D. & Campbell, D.T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin Company. ISBN: 0-395-30790-2

Fraenkel, J. R. & Wallen, N. E. (2003). How to design and evaluate research in education (5th ed.). Boston: McGraw Hill. ISBN: 0-07-253184-3

ITINERARY OF CLASS UNITS
Unit 1: Formulating Questions: The Decision-Making Process
Unit 2: The Nature of Measurement
Unit 3: Validity
Unit 4: Generalizing
Unit 5: The Sample Survey
Unit 6: Single-Case Research
Unit 7: Mid-Term Examination
Unit 8: Experimental Research Designs
Unit 9: Experimental Research Designs (Continued)
Unit 10: Multifactorial Intragroup Designs
Unit 11: Multigroup and Factorial Designs
Unit 12: Multigroup and Factorial Designs (Continued)
Unit 13: Ethical Considerations in the Conduct of Research
Unit 14: Final Examination

COURSE CONTACT HOURS
Professors who teach the course must divide the contact hours in the following way:
1. Face-to-face time in the classroom must not be less than 40 hours (16 sessions of 2.5 hours each, covering 14 units).
2. For the remaining 5 hours, students will conduct research projects or homework outside the classroom. These projects or homework will include, but are not limited to, the application of experimental designs to different settings.

METHODOLOGY
The professor who offers the course will select the specific methodologies. These could include, but are not limited to: lectures by the professor, lectures by invited speakers, group discussion of assigned readings, class research projects, student presentations, individual meetings with students, and working sub-groups in the classroom.

EDUCATIONAL TECHNIQUES
The professor who offers the course will select the educational techniques. These could include, but are not limited to: debates, practical demonstrations, films/videos, simulations, slide shows, and forums.

EVALUATION
The professor who offers the course will select the specific evaluation criteria.
These criteria could include, but are not limited to: scholarly papers, class projects, literature reviews, exams, and class presentations.

RESEARCH COMPETENCIES
Research competencies for Ph.D. students:
1. Analyze the different types of designs and their use in the research process.
2. Demonstrate the skills to design and carry out at least one research project.
3. Demonstrate effective communication in the oral and written presentation of the justification, methodology, results, and implications of a research work.
4. Maintain the ethical and legal standards that promote professional responsibility and integrity in research.

ATTENDANCE POLICY
Class attendance is mandatory for all students. After two unexcused absences, the student will be dropped from the class, unless the professor recommends otherwise. When a student misses a class, he/she is responsible for the material presented in that class.

AMERICANS WITH DISABILITIES ACT (ADA)
Students who need special accommodations should request them directly from the professor during the first week of class.

COURSE UNITS

UNIT 1: FORMULATING QUESTIONS: THE DECISION-MAKING PROCESS
Upon successful completion of this unit students will gain an overview of research designs and an understanding of the importance of asking useful and manageable questions.

LEARNING OBJECTIVES:
Upon successful completion of this unit students will be able to:
Determine the basic criteria to be considered in deciding whether a research question is useful and manageable.
Formulate research questions.
Explain the sources of research ideas, giving examples.
Explore literature retrieval systems such as Psychological Abstracts and the Science Citation Index.
Explain the three criteria that determine the choice of a particular research design.
Distinguish between internal validity and external validity.
Summarize the main characteristics of each major type of research design.
Compare and contrast the five basic research designs on the dimensions of internal validity and external validity.
Define: confounded variable, triangulation of methods, and hybrid methods.

ASSIGNED READINGS:
Kerlinger, F.N. & Lee, H. B. (2002)
Chapter 2 – Problems and hypotheses (Problemas e hipótesis)
Kazdin, A.E. (2001)
Chapter 4 – Selection of the problem and design (Selección del problema de investigación y del diseño)

UNIT 2: THE NATURE OF MEASUREMENT
Upon successful completion of this unit students will be aware that observation and measurement are fundamental to the empirical sciences. Students will learn how to develop clear measurement rules and will understand the concepts of reliability and validity, scales of measurement, techniques of observation and measurement, and the problem of reactivity.

LEARNING OBJECTIVES:
Upon successful completion of this unit students will be able to:
Define observation and measurement.
Recognize an ambiguous measurement rule and how it could be made unambiguous.
Discuss the concepts of reliability and validity.
Relate the concepts of reliability and validity to measurement error.
Summarize two methods of assessing reliability and two methods of validating a measure.
Know the steps to follow to maximize the reliability of measurement.
Compare and contrast nominal, ordinal, interval, and ratio scales.
Discuss examples of the following techniques of observation and measurement: qualitative recording, response counting, time measures, intensity measures, ratings, and self-observation and self-report.
Define observational reactivity and discuss why it is considered a problem in psychological research.
Summarize the methods of minimizing observational reactivity.

ASSIGNED READINGS:
Kerlinger, F.N. & Lee, H. B.
(2002) <br />Chapter3 – Constructs, variables and definitions<br />Chapter 26 – Fundamentals of measurement <br />Chapter 27 – Reliability <br />Chapter 28 - Validity <br />UNIT 3: VALIDITY<br />Upon successful completion of this unit students will an understand confounding and the internal validity problem, common threats to internal validity, controlling confounding and the social psychology of psychology experiments.<br />LEARNING OBJECTIVES:<br />Upon successful completion of this unit students will be able to:<br />Define internal validity and confounding, and describe the relationship between the concepts.<br />Know what sources of confounding are more prevalent in single-measurement situations and in repeated measurement studies.<br />Give examples of the types of confounding: subject selection bias, testing effects, statistical regression, history, subject maturation and subject mortality.<br />Understand how statistical regression might account for the end-of-season success often shown by successful dark-horse sports teams.<br />Explain the logic behind randomization and why it is the preferred approach to the control of confounding.<br />Compare and contrast within-subjects and between-subjects research designs and the advantages and disadvantages of each.<br />Discuss why counterbalancing is ineffective in controlling for carry-over effects.<br />Describe a means of minimizing each of the following sources of data contamination: instrumentation changes, regression effects, history, subject maturation and subject mortality.<br />Discuss the experimenter-expectancy effect and the four techniques that may be used to minimize it.<br />ASSIGNED READINGS:<br />Cook, T.D. & Campbell, D.T. (1979) <br />Chapter 2 - Validity<br />Kazdin, A.E. 
(2001)<br />Chapter 1 - Introduction<br />Chapter 2 – Formulating valid inferences I: Internal and external validity <br />Chapter 3 – Formulating valid inferences II: Construct validity and by statistical conclusion <br />UNIT 4: GENERALIZATION OF THE RESEARCH FINDINGS<br />Upon successful completion of this unit students will understand the problems of generalizing the results of psychological research, the process of establishing the range and limits of a study's external validity, spatial and temporal generalization and the problem of demand characteristics.<br />LEARNING OBJECTIVES:<br />Upon successful completion of this unit students will be able to:<br />Distinguish among the concepts of sample, target population.<br />Discuss the four main dimensions of generalization and in what sense are three of these dimensions "spatial" and the other one "temporal".<br />Summarize the evidence suggesting that physical setting, researcher attributes, and researcher expectancies may interact with the independent variable to restrict validity.<br />Discuss some of the characteristics of laboratory experiments that are thought to make generalizing strategies to deal with these problems.<br />Define and contrast probability sampling with convenience sampling, indicating the difficulties associated with probability sampling.<br />Describe the typical subject, and the typical volunteer subject, in psychology experiments.<br />Describe obtrusiveness of measurement, pretesting effects, multi-treatment carry over effects, demand characteristics, and subject attrition.<br />Discuss the rationale underlying conceptual replication findings.<br />Discuss the procedure used to reveal assess the effects of demand <br /> characteristics.<br />Summarize the logic of statistical significance testing.<br />Discuss reasons for testing the null hypothesis rather that alternative <br /> hypothesis.<br />Distinguish between types I and type II decision errors stating how statistical 
probability of each type of error might be reduced.
Discuss and critique the statistical logic behind the practice of replication.
Summarize the considerations in determining the quality of a replication study.

ASSIGNED READINGS:
Fraenkel, J. R. & Wallen, N. E. (2003)
Chapter 6 – Sampling

UNIT 5: THE SAMPLE SURVEY
Upon successful completion of this unit students will understand scientific sampling procedures and immediate generalization from "sample" to "population."

LEARNING OBJECTIVES:
Upon successful completion of this unit students will be able to:
1. Discuss the concepts of parameter, parameter estimation, target population, sampled population, sampling unit, and sampling frame.
2. Create examples of research problems in which parameters are estimated from samples.
3. Distinguish between probability sampling and nonprobability sampling.
4. Discuss the circumstances in which simple random sampling, cluster sampling, and multi-stage area sampling should be used.
5. Define and evaluate the following sampling techniques: convenience sampling, haphazard sampling, representative sampling, and quota sampling.
6. Summarize the three principal techniques used to gather survey data, and describe the advantages and disadvantages of each approach.
7. Discuss ways to minimize biased items, confusing items, reluctant respondents, and absent respondents in survey research.
8. Discuss the value of significance in constructing a questionnaire.
9. Give examples of possible survey items in the open-ended format.
10. Discuss the advantages and disadvantages of open-ended vs. fixed-alternative survey items.
11. Define: the sampling distribution of the proportion, the standard error of the proportion, error of estimation, confidence interval, and confidence level.
12. Discuss the general relationship between sample size and the size of the standard error of the proportion.
13. Discuss the general relationship between sample size and the width of the confidence interval.
14. Describe why random sampling is necessary for the successful construction of a confidence interval.

ASSIGNED READINGS:
Weathers, P.L., Furlons, M.J. & Solorzano, D. (1993). Mail survey research in counseling psychology: Current practice and suggested guidelines. Journal of Counseling Psychology, 40(2), 238-244.
Kerlinger, F.N. & Lee, H. B. (2002)
Chapter 23 – Non-experimental research (Investigación no experimental)
Chapter 25 – Survey research (Investigación por encuestas)

UNIT 6: SINGLE-CASE RESEARCH
Upon successful completion of this unit students will understand the case study and the single-case experiment, and the uses of each method.

LEARNING OBJECTIVES:
Upon successful completion of this unit students will be able to:
1. Discuss the similarities and differences between the case study and the single-case experiment.
2. Determine the ethical and practical considerations that might encourage a scientist-practitioner to use single-case methods of research instead of conventional group designs.
3. Discuss some of the typical arguments for and against single-case methods.
4. Describe the general uses of the case study, discussing why some are more valid or legitimate than others.
5. Discuss each of the following considerations when conducting a case study: deciding on the purpose, variables, and behaviors; selecting a target population; deciding on sources and types of data; recording data; and using the method of internal consistency.
6. Summarize the major criticisms of the case study as a method of research, giving particular attention to the criteria of internal
validity and external validity.
7. State the general rationale behind single-case experiments.
8. Give an example of the A-B-A-B reversal, multiple-baseline, and random time-series designs.
9. Discuss the concept of "baseline" and its function in single-case experimentation.
10. Discuss some shortcomings and pitfalls of single-case experiments.

ASSIGNED READINGS:
Kazdin, A.E. (2001)
Chapter 9 – Single-case studies and research with unique cases

UNIT 7: MID-TERM EXAMINATION

UNIT 8: EXPERIMENTAL RESEARCH DESIGNS
Upon successful completion of this unit students will know the basic elements of experimentation and experimental research designs. They will also learn to test the results of an experiment for statistical significance using the analysis of variance.

LEARNING OBJECTIVES:
Upon successful completion of this unit students will be able to:
1. Discuss the basic rationale underlying the experimental method.
2. Describe three ways in which an independent variable may be manipulated.
3. Determine what important functions are served by a pilot study.
4. Distinguish between pseudo-experiments and true experiments.
5. Evaluate the following designs, giving special attention to their vulnerability to confounding: the single-group post-test design, the single-group pre-test post-test design, and the static-group comparison design.
6. Discuss the specific steps to plan and set up a randomized control-group design.
7. Discuss the specific steps to plan and set up a randomized-blocks experiment.
8. Describe the concepts of: between-subjects manipulation, block randomization, independent-groups designs, correlated-groups designs, the power of an experiment, and non-parametric statistics.
9. Summarize the advantages and drawbacks of repeated-measures designs, determining under what circumstances these designs should not be used.
10. Compare the theory behind the analysis of variance with the rationale of the experimental method.

ASSIGNED READINGS:
Fraenkel, J. R. & Wallen, N. E. (2003)
Chapter 13 – Experimental research
Kerlinger, F.N. & Lee, H. B. (2002)
Chapter 18 – Research design: Purpose and principles (Diseños de investigación: Propósito y principios)
Chapter 19 – Inadequate designs and design criteria (Diseños inadecuados y criterios para el diseño)
Chapter 20 – General designs for research (Diseños generales de investigación)
Kazdin, A.E. (2001)
Chapter 5 – Experimental research: Group designs (Investigación experimental: Diseños de grupos)
Chapter 6 – Control and comparison groups (Grupos control y de comparación)
Chapter 7 – Evaluation of the impact of experimental manipulation (Evaluación del impacto de la manipulación experimental)

UNIT 9: EXPERIMENTAL RESEARCH DESIGNS (CONTINUED)
LEARNING OBJECTIVES:
Upon successful completion of this unit students will be able to:
1. Discuss the basic rationale underlying the experimental method.
2. Describe three ways in which an independent variable may be manipulated.
3. Determine what important functions are served by a pilot study.
4. Distinguish between pseudo-experiments and true experiments.
5. Evaluate the following designs, giving special attention to their vulnerability to confounding: the single-group post-test design, the single-group pre-test post-test design, and the static-group comparison design.
6. Discuss the specific steps to plan and set up a randomized control-group design.
7. Discuss the specific steps to plan and set up a randomized-blocks experiment.
8. Describe the concepts of: between-subjects manipulation, block randomization, independent-groups designs, correlated-groups designs, the power of an experiment, and non-parametric statistics.
9. Summarize the advantages and drawbacks of repeated-measures designs, determining under what circumstances these designs should not be used.
10. Compare the
theory behind the analysis of variance with the rationale of the experimental method.

ASSIGNED READINGS:
No new readings.

UNIT 10: MULTIFACTORIAL INTRAGROUP DESIGNS
Upon successful completion of this unit students will know the basic elements of intragroup designs. They will also be able to test the results of these designs for statistical significance using covariance, correlation, and F and t tests.

LEARNING OBJECTIVES:
Upon successful completion of this unit students will be able to:
1. Define and offer an example of an intragroup design.
2. Discuss the advantages and disadvantages of intragroup designs.
3. Describe and offer one example of each kind of intragroup design.
4. Compare the advantages and disadvantages of intragroup and intergroup designs.
5. Discuss how to control "progressive error" in a repeated-measures design.

ASSIGNED READINGS:
Cook, T.D. & Campbell, D.T. (1979)
Chapter 5 – Quasi-experiments: Interrupted time-series designs
Fraenkel, J. R. & Wallen, N. E. (2003)
Chapter 13 – Experimental research
Kerlinger, F.N. & Lee, H. B. (2002)
Chapter 22 – Quasi-experimental designs and designs with n = 1 (Diseños cuasi experimentales y con n = 1)

UNIT 11: MULTIGROUP AND FACTORIAL DESIGNS
Upon successful completion of this unit students will understand more complex experimental models in which two or more independent variables are considered. Students will become familiar with set-ups that accommodate different classes of variables and will learn how to perform tests of statistical significance on data from factorial experiments.

LEARNING OBJECTIVES:
Upon successful completion of this unit students will be able to:
1. Define and offer an example of a randomized factorial experiment.
2. Discuss the advantages of factorial experiments relative to single-variable experiments.
3. Define the concepts of: main effect, interaction, Solomon four-group design, statistical main effect, and pairwise comparisons.
4. Describe and make up one example of each kind of "mixed factorial."
5. Discuss how to control "progressive error" in a repeated-measures factorial.
6. Create line graphs representing different experimental outcomes.
7. Discuss the component parts of a factorial ANOVA.
8. Discuss in what sense the ANOVA performed on the data from a single-variable correlated-groups experiment is the same as a factorial ANOVA.
9. Discuss why it is necessary to conduct pairwise comparisons after obtaining a significant F ratio.

ASSIGNED READINGS:
Fraenkel, J. R. & Wallen, N. E.
(2003)<br />Chapter: 13 – Experimental Research<br />UNIT 12: CONTINUE UNIT 11<br />LEARNING OBJECTIVES:<br />Upon successful completion of this unit students will be able to:<br />1.Define and offer an example of a randomized factorial experiment.<br />2.Discuss the advantages of factorial experiments relative to single-variable experiments.<br />3.Define the concepts of: main effect, interaction, Solomon Four Group design, statistical main effect, and pairwise comparisons.<br />4.Describe and make up one example of each kind of "mixed factorials".<br />5.Discuss how to control "progressive error" in a repeated-measures factorial.<br />6.Create line graphs representing different outcomes in experiments.<br />7.Discuss the component parts in a factorial ANOVA.<br />8.Discuss in what sense the ANOVA performed on the data from a single-variable correlated-groups experiment is the same as a factorial ANOVA.<br />9.Discuss why it is necessary to conduct pairwise comparisons subsequent to obtaining a significant F ratio.<br />ASSIGNED READINGS:<br />No new readings.<br />UNIT 13:ETHICAL CONSIDERATIONS IN THE CONDUCT OF RESEARCH<br />Upon successful completion of this unit students will understand the various ethical issues and dilemmas that arise in psychological research projects.<br />LEARNING OBJECTIVES:<br />Upon successful completion of this unit students will be able to:<br />1.Discuss the universal contract that emerges when people enter into interpersonal relationships.<br />2.Describe the value conflicts that give rise to ethical dilemmas in psychological research.<br />3.Appreciate the ethical responsibilities of investigators.<br />4.Discuss the principle of harm avoidance.<br />5.Explain the concepts of "debriefing sessions", "dehoaxing", and "desensitizing" the subjects.<br />6.Describe the concepts of informed consent, consent forms, and consent competence.<br />7.Explain the rationale for using deception in psychological research and the alternatives to 
deception.<br />7.8.Describe the principles of curtailment of personal freedom and confidentiality. <br />ASSIGNED READINGS:<br />Kazdin, A.E. (2001). <br />Chapter 16 – Ethical problems and guidelines for research (Problemas éticos y guías para la investigación)<br />UNIT 14: FINAL EXAMINATION<br />REFERENCES<br />Andrews-Necnn, F. (2000). El efecto del método de aprendizaje cooperativo con computadora(MACC) en la adquisición de destrezas de español, auto percepción y comportamiento social para estudiantes del tercer grado. Disertación doctoral no publicada, Universidad Carlos Albizu, San Juan, Puerto Rico.<br />Asociación Americana de Psicología. (2002). Manual de estilo de publicaciones. (Chavez, M., Padilla, G. & Inzunza, M. Trads.) México: Editorial El Manual Moderno. (Trabajo original publicado en 2001).<br />Ardila, R. (1986). Significado y necesidad de la psicología comparada. Revista latinoamericana de psicología, 18, 157-169.<br />Campbell, D.T. & Stanley, J.C. (1966). Experimental and quasi-experimental design, for research. Chicago: McNally College Publishing Co.<br />Cook, T.D. & Campbell, D. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin Company.<br />Cook, D.T. & Campbell, D. (1976). The design and conduct of quasi-experiments and true experiments in field setting in M.D. Dinnette (Ed.), Handbook of indutrial and organizational psychology. (p. 223-326). Chicago: Rand McNally College Publishing Co.<br />Crabtree, B. & Miller, W. (1992) Doing qualitative research. California: Sage Publications.<br />Creswell, J.W. (2002). Research Design: Qualitative and Quantitative Approaches. California: Sage Publications.<br />Cuevas, S.M. (1996). Diseno e implantación de un modelo e intervención psicológica:<br />programa de técnicas de relajación y patrones de frecuencias de sonidos musicales para el manejo de la ansiedad. 
Disertación doctoral no publicada, Universidad Carlos Albizu, San Juan, Puerto Rico.<br />Edwards, A.L. (1972). Experimental design in psychological research. New York: Holt, Rinehart and Winston, inc.<br />Edwards, A.L. (I985). Multiple regression and the analysis of variance and covariance.New York: W.H. Freeman and Company.<br />Ferran, M. (2001). SPSS para windows: análisis estadístico. México: McGraw Hill.<br />García, J.A. (1999). Effects of group therapy with individuals diagnosed with schizotypal personality functioning. Disertación doctoral no publicada, Universidad Carlos Albizu, San Juan, Puerto Rico.<br />Heiman, G. W. (1999). Research methods in psychology. New York: Houghton Mifflin Company.<br />Henry, G.T. (1990). Practical sampling. California: Sage Publications.<br />Hernández, R., Fernández, C. & Baptista, P. (2002). Metodología de la investigación. México: McGraw-Hill.<br />Isaac, S. & Michael. W.B. (1995). Handbook in research and evaluation. California: Edits Publishers.<br />Kazdin, A.E. (2001). Métodos de investigación en psicología clínica. (Gutiérrez, M.G. Trads.) México: Pearson Educación. (Trabajo original publicado en 1998).<br />Kappel, C. (1991). Design and analysis: A researcher's handbook (3rd Edition) New Jersey: Prentice Hall, Inc.<br />Kcrlinger, F.N. & Lee, H. B. (2002). Investigación del comportamiento: Métodos de investigación en ciencias sociales. (Pineda, L.E., Mora, I., Diez, C.B. & Vadillo, G. Trads.) México: McGraw Hill. (Trabajo original publicado en 1986).<br />Leedy, P.D. (1989). Practical research: Planning and design. New York: McMillan Publishing Company.<br />León, O.G. & Montero, I. (1993). Diseño de investigaciones. Esparza: McGraw-Hill.<br />Matheson. D.W., Bruce, R.L. & Beauchamp, K.L. (1978). Experimental psychology: Research designs and analysis (3rd ed.) New York: Holt, Rinehart and Winston.<br />Miles, M. & Michael-Huberman, A. (1994). Qualitative data analysis. 
California: Sage Publications.<br />Morgan, D.L. (1993). Successful focus groups. California: Sage Publications.<br />Mullen, B.. (1989). Advanced basic meta-analysis. New Jersey: Lawrence Erlbaum Associates.<br />Patton, M.Q. (1990). Qualitative evaluation and research methods. California: Sage Publications.<br />Pérez, V.P. (1997). Diseño e implantación de un adiestramiento a maestros/as para identificar y prevenir el abuso sexual en niños/as. Disertación doctoral no publicada, Universidad Carlos Albizu, San Juan, Puerto Rico.<br />Rodríguez-Irlanda, D. (2001). Medición "assessment " y evaluación. Puerto Rico: Publicaciones Puertorriqueñas Editores.<br />Runyon, R. P., Coleman, K.A. & Pittenger, D.J. (2000). Fundamentals of behavior statistics. New York: McGraw Hill.<br />Shaughnessy, J.J., Zechmeister, E.B. & Zechmeister, J.S. (2000). Research methods in psychology. New York McGraw Hill.<br />Santiago, E.F. (2001). Desarrollo e implantación de un modelo de intervención psico-músico-Terapéutico para pacientes con problemas de hipertensión en una clínica de cardiología. Disertación doctoral no publicada, Universidad Carlos Albizu, San Juan, Puerto Rico.<br />Schmidt, N.B. & Woolaway-Bickel, K. (2000). The effects of treatment compliance on outcome in cognitive-behavioral therapy for panic disorder: Quality versus quantity. Journal of Consulting and Clinical Psychology, (68), 1, 13-18.<br />Stewart, D.W. & Shamdasani, P. (1990). Focus groups: Theory and practice. California: Sage Publications.<br />Viera, S. (1998). Fundamentos del Razonamiento Estadístico. Puerto Rico: Universidad Carlos Albizu.<br />Weathers, P.L., Furlons, M.J. & Solorzano, D. (1993). Mail Survey Research in Counseling Psychology: Current Practice and Suggested Guidelines. Journal of counseling Psychology, 40 (2), 238-244.<br />Yin, R. K. (1994). Case study research: Design and Methods. California: Sage <br />Publications. <br />Revised by: Juan A. Nogueras, Ph.D. (August, 2008)<br />
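Unit 5's final objectives (the standard error of a proportion, confidence intervals, and their relation to sample size) can be illustrated with a short numerical sketch. The code below is illustrative only; the function names and sample figures are the editor's examples, not part of the course materials.

```python
# Illustrative sketch of Unit 5, objectives 11-14: the standard error of a
# sample proportion and an approximate 95% confidence interval computed from it.
# All names and numbers are made-up examples, not course materials.
import math


def proportion_se(p_hat: float, n: int) -> float:
    """Standard error of a sample proportion p_hat based on n observations."""
    return math.sqrt(p_hat * (1 - p_hat) / n)


def proportion_ci(p_hat: float, n: int, z: float = 1.96) -> tuple:
    """Normal-approximation interval; z = 1.96 gives roughly 95% confidence."""
    se = proportion_se(p_hat, n)
    return (p_hat - z * se, p_hat + z * se)


# Objective 12: quadrupling the sample size halves the standard error.
se_400 = proportion_se(0.5, 400)     # ~0.025
se_1600 = proportion_se(0.5, 1600)   # ~0.0125

# Objective 13: a smaller standard error yields a narrower confidence interval.
low, high = proportion_ci(0.5, 400)  # roughly (0.451, 0.549)
```

Note that this normal-approximation interval presumes random sampling (objective 14); computed from a convenience sample, the interval has no clear interpretation.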
