How successful are discovery tools in providing relevant results? Can these single search boxes really replace the library catalog? This presentation reports an analysis of user queries sampled from the University of Southern California's discovery tool, and compares the relevance of the results those same queries return when entered into Google and Google Scholar. The session offers insight into user search behavior and examines whether discovery tools are actually delivering on their promise of relevant, expected results.
Finding Our Value in Lower Usage Numbers: An Examination of Reference Servic... — Elizabeth Namei
Paper presented at the California Association of Research Libraries Conference, April 2016
Analysis of two case studies: the impact of demand-driven acquisitions on ILL, and the impact of self-service reference on traditional reference services.
Slides of a Clean Interviewing workshop given at UNITEC, Auckland, 3 Feb 2017.
Workshop description
James Lawley will show how the wording of interview questions can unintentionally and unknowingly bias answers, how ‘leading’ questions cast doubt on the authenticity of the data collected, and how you can avoid this by asking ‘clean’ questions.
The aim of this workshop is to learn and integrate the principles of Clean Interviewing, and to develop your ability to design and frame clean questions during practice interviews. You will learn how to interview using Clean Language so your interviewees are given maximum opportunity to provide reliable information, ‘uncontaminated’ by an interviewer’s framing, presuppositions and metaphors.
You will also learn a new process for validating the ‘cleanness’ of an interview thereby increasing the robustness of your methodology.
Research methods can generally be divided into two main categories: quantitative and qualitative. This webinar will provide an overview of quantitative methods, with a brief distinction between quantitative and qualitative approaches. We will focus on when and how to use quantitative research and discuss types of variables and statistical analyses.
The presentation will be led by Dr. Carlos Cardillo.
About CORE:
The Culture of Research and Education (C.O.R.E.) webinar series is spearheaded by Dr. Bernice B. Rumala, CORE Chair & Program Director of the Ph.D. in Health Sciences program in collaboration with leaders and faculty across all academic programs.
This innovative and wide-ranging series is designed to provide continuing education, skills-building techniques, and tools for academic and professional development. These sessions will provide a unique chance to build your professional development toolkit through presentations, discussions, and workshops with Trident’s world-class faculty.
For further information about CORE or to present, you may contact Dr. Bernice B. Rumala at Bernice.rumala@trident.edu
Mentorship in Child & Adolescent Psychiatry - AACAP Two Day Mentorship Progra... — Université de Montréal
"Mentorship in Child & Adolescent Psychiatry"
Invited presentation at the AACAP Two Day Mentorship Program
AACAP Annual Meeting
Seattle, WA, USA
26.10.2018
Presentation to 2016 Evidence Based School Counseling Conference, University ... — Toby Cunningham
Presentation by Toby Cunningham to the 2016 Evidence Based School Counseling Conference at the University of Georgia. Covers the scientific basis for aptitude-based career counseling, and how that science was applied in the development of the YouScience platform.
Assignment 2 RA Annotated Bibliography (.docx) — rock73
Assignment 2: RA: Annotated Bibliography
The foundation of a research study comes from an understanding of the theory and from knowledge that is set forth by the literature in the field. Before a researcher can develop a sound and needed research design, he or she must first determine what is already known, how the topic has previously been studied, and where there are gaps in the knowledge and/or techniques that have been used to study the research problem.
In this module, you will further explore the research topic that you chose in M1 Assignment 2. Additionally, this will be the time to make any needed changes to your research question before you submit your proposal in M5 Assignment 2 RA 2. For this assignment, you will create a 3- to 4-page document following the directions given below.
Using the Argosy University online library resources and the Internet, locate 6 peer-reviewed articles that could be used for the literature review portion of the research proposal in M5 Assignment 2 RA 2. The articles should fall into the following categories:
· 2 quantitative studies
· 2 qualitative studies
· 1 mixed-methods study
· 1 theoretical or research design of your choice
For each article, cover the following points in 250–350 words:
· The problem to be studied
· The rationale for the study
· The type of research that was conducted (qualitative, quantitative, or mixed-methods)
· The data collection strategy
· The data analysis tools that were used
· A summary of the findings
· A statement of how this article will support your proposed study
Present your work in a 3- to 5-page Microsoft Word document in the following format:
· Reference the source in APA format.
· Follow the reference with a single block paragraph of 250–350 words comprising your annotation (summary, evaluation, and reflection).
· The whole block should be double-spaced and indented.
· Repeat for the next article—don’t forget that your articles should be listed in alphabetical order just as you would on a standard APA reference page.
All written assignments and responses should follow APA rules for attributing sources.
Submission Details:
· By Wednesday, September 20, 2017, save your document as M3_A2_Lastname_Firstname.doc and submit it to the M3 Assignment 2 RA Dropbox.
The RA is worth 200 points and will be graded according to the following rubric.
Running head: EVALUATING QUANTITATIVE DESIGN
Evaluating Quantitative Design
Sherry L. Crowe
Dr. Cynthia Palmisano
Research and Evaluation
FP6030
September 13, 2017
Evaluating Quantitative Research
PART 1: Comparing Methods
Experimental Research Method
In the experimental research method, research is viewed as a systematic, scientific process in which a researcher manipulates one or more variables while controlling and measuring any variation in other variables (Stangor, 2011). The variables being manipulated are called the independent variables, while ...
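The manipulate-and-measure logic described above can be sketched as a small simulation. This is only an illustration: the baseline score, effect size, and noise level below are invented, not taken from any real study.

```python
import random

random.seed(42)  # make the simulated experiment reproducible

def run_trial(condition):
    """Hypothetical outcome: the 'treatment' condition adds a fixed
    effect to a baseline score, plus random measurement noise."""
    effect = 5.0 if condition == "treatment" else 0.0
    return 50.0 + effect + random.gauss(0, 2)

# Manipulate the independent variable (condition) across two groups
# while everything else inside run_trial is held constant.
control = [run_trial("control") for _ in range(30)]
treatment = [run_trial("treatment") for _ in range(30)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"control mean = {mean(control):.1f}, treatment mean = {mean(treatment):.1f}")
```

Because only the condition differs between groups, any systematic difference in the dependent variable (the score) can be attributed to the manipulation rather than to uncontrolled variables.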
Study notes (.docx) — hanneloremccaffery
Study notes
Some of the information below may repeat what you have read in Creswell. In chapter 10, Singleton addresses field research, which overlaps with some qualitative designs; for Singleton, though, it differs from qualitative research because field research often involves quantification and more than simple observation. (Sometimes qualitative research also involves quantification.) What Singleton addresses as field research comes out of the traditions of sociology and anthropology. Field research is probably more like ethnography than like other qualitative designs.
In a previous unit, we mentioned the use of existing data for research. Sometimes using data that are available lessens the data gathering task because you do not have to be dependent on others to return a survey or agree to an interview. Here is a good example of the use of existing data in a causal-comparative design. A former Princeton student who was in the Education program and is an assistant principal did her dissertation using existing data. She wanted to know if the reading scores on a standardized test (ITBS) were different after a new approach to teaching reading than before the new approach began. She went back to 1991 and recorded scores of 1st and 2nd graders for a five-year period before the intervention in 1996. Then she obtained scores of 1st and 2nd graders for five years after the new program and then did a number of statistical comparisons. She found significant differences on the comparisons so it would appear that the new approach to reading was effective. She could have set up a quasi-experimental design, but unless she did it for a number of years, she would not have had nearly as much data. This is a case in which it was not feasible to do an experimental design, but she obtained useful data.
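The kind of pre/post comparison described above is often done with a two-sample t statistic. As a minimal sketch using only the standard library (a real analysis would typically use a statistics package, and the scores below are invented, not the actual ITBS data):

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_b - mean_a) / se

# Hypothetical mean reading scores for five years before and five years
# after a new reading program (illustrative numbers only).
pre = [52.1, 53.0, 51.8, 52.5, 52.9]
post = [55.2, 56.1, 55.8, 56.5, 55.9]

t = welch_t(pre, post)
print(f"Welch t = {t:.2f}")  # a large positive t suggests higher post scores
```

A large t statistic (compared against the t distribution with the appropriate degrees of freedom) is what would justify calling the pre/post difference "significant."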
Not all research using available data is causal-comparative. Much is descriptive. Probably the use of available data for research is among the top three types of designs used. Think of all the studies that come from the U.S. Census every ten years. You may have some good data stored at your place of employment. One researcher in Arizona has studied the trash/garbage of people for 25 years to find out how they live. Can you imagine sifting through someone's trash for 25 years? He has, however, learned a great deal about how the people whose trash he has swiped in the Tucson area live.
Moving back now to Chapter 10 in Singleton: while qualitative research is simply not acceptable to some researchers, in many ways it can be more valuable than quantitative research when precise measurement is not necessary. Probably about 40% of Princeton students do some type of qualitative research for their dissertations.
Singleton refers to qualitative research as field research. He simply uses a broad category to cover various kinds because qualitative research is done in the real world (field).
One primary difference between quantitative and quali.
iKNEER (Interactive Knowledge Network for Engineering Education Research) User Study: conducted six interviews with novice researchers to understand conceptual hurdles in their research, and how computer tools influence their research decision-making.
These slides come from a presentation given as part of the session "Learning from the evidence: improving microbiology teaching through educational research" at the Society for General Microbiology conference in Nottingham, September 2010.
Running head: CMGT 555 WK 2 DQ 1
Student’s Name
Institution
Main Post
What are 2 key attributes to well-written requirements?
There are several key attributes of well-written requirements for a system project; two core ones are clarity and risk identification. A well-written requirement should identify risks and threats that may compromise the project at hand, and a clear, straightforward set of requirements enables the project team members to understand everything that is needed for the project to succeed (Wiegers & Beatty, 2013).
How do these attributes impact the quality of requirements? How might you assess system requirements based on these attributes?
Clarity creates awareness of what should be done on the project, promoting its effectiveness and success. Identifying risks also allows you to select the most viable alternative for preventing threats that may compromise the entire project. Through this, the project team will know what the project requires. I would analyze all the risks likely to affect the project and assess whether we as a team are ready to counter any threats that might arise.
Response to Peer 1
Hello, thank you for your significant contribution on the key attributes of well-written requirements and their impact on requirement quality. I have read through your post and found it not only resourceful but also captivating. Reading it has changed my perception and helped me understand the significance of ensuring well-written requirements. I concur with you on the first attribute, clarity. A well-written requirement should be free from unnecessary information to ensure accuracy and validity; I think clarity also promotes soundness and makes the requirements easily understandable.
Another important quality to consider is feasibility: the requirement should be achievable within the available resources and budget. Clarity and feasibility enable the project team members and various stakeholders to understand exactly what is needed for a given project, and allow the team to know the requirements regardless of the situation.
Response to Peer 2
Hello, such a fantastic post on the qualities of well-written requirements. Requirement qualities are vital in ensuring that a person understands the information needed for a project to be undertaken. It is true that a key quality of a well-written requirement is that it makes good use of its resources. I concur with you that risk is another vital attribute of well-written requirements, as it identifies what is at stake for each requirement.
Discussion Board 2: Learning Styles/Personality
After reading Chapter 7: Strategic Learning and Studying and Chapter 8: Test-Taking Skills and Strategies, and watching the Learning Style YouTube clip in this module, I would like you to answer the following questions in the Discussion Board:
1) What is your preferred learning style?
2) What is your preferred learning environment (sound, temperature, lighting, lecture vs. hands-on vs. discussion, working with others or not, etc.)?
3) What are some strategies (according to your learning style) you use to study?
Preferred Learning Styles
Please respond to the following questions; use 12-point Times New Roman, proper citations, 300 to 500 words.
1. Learning Styles - Discussion Board
1) What is your preferred learning style? I am definitely a kinesthetic learner. I can hear something or study something but will not feel comfortable with it until I have hands on experience with it.
2) What is your preferred learning environment (sound, temperature, lighting, lecture vs. hands-on vs. discussion, working with others or not, etc.)? My preferred learning environment is in a classroom setting with others. Working with other classmates on projects really seems to help me. Good lighting is always helpful.
3) What are some strategies (according to your learning style) you use to study? I like to take notes during instruction. Since that is not possible in online classes, participating in the discussions with other classmates is also a good way to study and learn. Their perspective on a topic can offer a different way of looking at something that I might not have learned on my own.
2. Learning Styles - Discussion Board
My preferred learning styles are visual and kinesthetic. I like studying in a bright, cold room because it is harder to get tired, and we all know studying is tiring. I usually wear headphones and study alone as well. Strategies I use to study include reading the content over and over, writing down notes on the material multiple times, and using flash cards.
Required Resources Week 2
Required Text
Read from the course text, Applied project: Capstone in psychology:
a. Chapter 3: Between and Within Groups Research Designs
b. Chapter 6: Survey and Questionnaire Research
Book
American Psychiatric Association. (2013). The Diagnostic and Statistical Manual of Mental Disorders. (5th ed.). Washington, D.C.: American Psychiatric Publishing.
· This is the manual of psychiatric diagnostic criteria used by mental health professionals.
Articles
Bauer, R.M. (2007). Evidence-based practice in psychology: Implications for research and research training. Journal of Clinical Psychology, 63(7), 685–694. Retrieved from the EBSCOhost database.
· This article discusses the implications of evidence-based practice (EBP) for research and research training in clinical psychology. Bauer argues that EBP provides a useful framework for addressing some heretofore ig ...
DNA Research Paper
Essay about Organizational Structures
Research Methods Essay
Educational Research
Methodology of Research Essay examples
Structure and Agency Essay
Sampling Methods Essay
Research Paper On Pcos
Essay about Structuralism
Fundamentals of Research Essay
Strategies on How to Infer & Explain Patterns and Themes from Data — NoMore2020
A research paper that we presented and submitted to our teacher, Mrs. Lopez. I uploaded this because I wanted to help other students in the ABM track, especially Senior High students who have Research in Daily Life among their subjects.
I need about 150 words for each question — karinorchard1
I need about 150 words for each question
Please answer questions individually and provide each with its own references
Topic 1
Qualitative Research and Theories/Paradigms of Research
DQ 1
Over the past 16 years, only two building permits for new housing construction have been issued in Sedgwick County, Colorado. This is consistent with the depopulation (particularly of younger persons) and economic diminution that is attributed to a declining rainfall since the 1970s. These are quantitative details. It is unclear how much of the depopulation was due to perceived opportunities elsewhere, to copycat or fad behavior, and to perceived change in local economic opportunity. Why is qualitative analysis more likely to identify the leading cause of Sedgwick County's out-migration than quantitative methods? Which characteristics of qualitative research most influenced your response to this question?
DQ 2
In the GCU library, locate four empirical studies you have not used before on a topic you are interested in researching (Use the Empirical Research Checklist to determine if a study is empirical). List the theoretical foundation and Permalink for each study, and add the studies to your RefWorks list. Then determine a theoretical foundation (laws, theories, models, concepts) for a study that you might like to research for your dissertation. Why did you select this foundation? Finally, comment on other learners' theoretical foundations and if/how they can be improved.
Topic 2
Designing Qualitative Studies; Relationships Among Researchers, Subjects, and Institutions
DQ 1
Critics of qualitative research often posit that it is subjective due to the fact that the researcher collects the data. Therefore, the researcher's own prior experiences, prejudices, and attitudes may bias the data, and therefore, the results of the study. How would you respond to someone who presents this criticism to you?
DQ 2
Drawing on your prior knowledge, the studies and literature research you have completed, and the readings for this topic, reflect on the role of ethics in the research process. Discuss strategies a doctoral learner or researcher might employ to protect participants and the institutions (GCU/data collection site) in a study. Explain any concerns/uncertainties you have regarding ethical conduct during dissertation research.
Topic 3
Generating and Collecting Qualitative Data: Procedures and Ethical Considerations
DQ 1
Suppose you are interested in the behaviors of college professors that have high ratings of student satisfaction. The research goal is to identify the teaching behaviors of these successful professors so that these behaviors can be built into the curricula of doctoral leadership programs. The sample for this study will consist of 10 randomly selected professors who received high end-of-course survey scores. You want to use a case study design that requires at least two sources of data. What data collection instruments will be the mo ...
How to Make a Field Invisible in Odoo 17 — Celine George
It is possible to hide, or make invisible, some fields in Odoo, commonly by using the "invisible" attribute on the field in a view definition. This slide shows how to make a field invisible in Odoo 17.
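As a minimal sketch of the technique (the view id, name, and inherited field below are chosen for illustration, not taken from the slides), an inheriting form view can set the `invisible` attribute on an existing field:

```xml
<!-- Illustrative inheriting view: hides the partner's 'comment' field -->
<record id="view_partner_form_hide_comment" model="ir.ui.view">
    <field name="name">res.partner.form.hide.comment</field>
    <field name="model">res.partner</field>
    <field name="inherit_id" ref="base.view_partner_form"/>
    <field name="arch" type="xml">
        <field name="comment" position="attributes">
            <attribute name="invisible">1</attribute>
        </field>
    </field>
</record>
```

In Odoo 17 the `invisible` attribute also accepts a Python expression evaluated against the record, e.g. `invisible="state != 'draft'"`, so a field can be hidden conditionally rather than always.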
2. Learning Styles - Discussion Board
My proffered learning style is visual and kinesthetic. I like studying in a bright cold room because it is harder to get tired because we all know studying is tiring. I usually just were headphones and study alone as well. Strategies I use to study include reading the content over and over again, writing down notes on the material multiple times, and using flash cards to help me.
Required Resources Week 2
Required Text
Read from the course text, Applied project: Capstone in psychology:
a. Chapter 3: Between and Within Groups Research Designs
b. Chapter 6: Survey and Questionnaire Research
Book
American Psychiatric Association. (2013). The Diagnostic and Statistical Manual of Mental Disorders. (5th ed.). Washington, D.C.: American Psychiatric Publishing.
· This is the manual of psychiatric diagnostic criteria used by mental health professionals.
Articles
Bauer, R.M. (2007). Evidence-based practice in psychology: Implications for research and research training.Journal of Clinical Psychology, 63(7), 685–694. Retrieved from the EBSCOhost database.
· This article discusses the implications of evidence-based practice (EBP) for research and research training in clinical psychology. Bauer argues that EBP provides a useful framework for addressing some heretofore ig ...
Dna Research Paper
Essay about Organizational Structures
Research Methods Essay
Educational Research
Methodology of Research Essay examples
Structure and Agency Essay
Sampling Methods Essay
Research Paper On Pcos
Essay about Structuralism
Fundamentals of Research Essay
Strategies on How to Infer & Explain Patterns and Themes from DataNoMore2020
A research that we presented and submitted to our teacher, Mrs. Lopez. I uploaded this because I wanted to help other students in the ABM track especially to Senior High Students who have Reseach in Daily Life in their subjects.
I need about 150 words for each questionPlease answer questions karinorchard1
I need about 150 words for each question
Please answer questions individually and provide each with its own references
Topic 1
Qualitative Research and Theories/Paradigms of Research
DQ 1
Over the past 16 years, only two building permits for new housing construction have been issued in Sedgwick County, Colorado. This is consistent with the depopulation (particularly of younger persons) and economic diminution that is attributed to a declining rainfall since the 1970s. These are quantitative details. It is unclear how much of the depopulation was due to perceived opportunities elsewhere, to copycat or fad behavior, and to perceived change in local economic opportunity. Why is qualitative analysis more likely to identify the leading cause of Sedgwick County's out-migration than quantitative methods? Which characteristics of qualitative research most influenced your response to this question?
DQ 2
In the GCU library, locate four empirical studies you have not used before on a topic you are interested in researching (Use the Empirical Research Checklist to determine if a study is empirical). List the theoretical foundation and Permalink for each study, and add the studies to your RefWorks list. Then determine a theoretical foundation (laws, theories, models, concepts) for a study that you might like to research for your dissertation. Why did you select this foundation? Finally, comment on other learners' theoretical foundations and if/how they can be improved.
Topic 2
Designing Qualitative Studies; Relationships Among Researchers, Subjects, and Institutions
DQ 1
Critics of qualitative research often posit that it is subjective due to the fact that the researcher collects the data. Therefore, the researcher's own prior experiences, prejudices, and attitudes may bias the data, and therefore, the results of the study. How would you respond to someone who presents this criticism to you?
DQ 2
Drawing on your prior knowledge, the studies and literature research you have completed, and the readings for this topic, reflect on the role of ethics in the research process. Discuss strategies a doctoral learner or researcher might employ to protect participants and the institutions (GCU/data collection site) in a study. Explain any concerns/uncertainties you have regarding ethical conduct during dissertation research.
Topic 3
Generating and Collecting Qualitative Data: Procedures and Ethical Considerations
DQ 1
Suppose you are interested in the behaviors of college professors that have high ratings of student satisfaction. The research goal is to identify the teaching behaviors of these successful professors so that these behaviors can be built into the curricula of doctoral leadership programs. The sample for this study will consist of 10 randomly selected professors who received high end-of-course survey scores. You want to use a case study design that requires at least two sources of data. What data collection instruments will be the mo ...
1. Measuring Our Relevancy:
Comparing Results in a Web-Scale Discovery Tool,
Google & Google Scholar
Elizabeth Namei, University of Southern California Libraries, namei@usc.edu
and
Christal Young, University of Southern California Libraries, youngc@usc.edu
#summonrelevancy ACRL 2015 Conference
3. "I could throw an author and title in [Summon] but I know that works very erratically….I often get inundated by irrelevant things…" (English professor)
"[Summon] just doesn't generally turn up reliable results for me. I'll search a very obvious keyword or a very specific keyword and it won't turn up the most relevant results first even though I know the highly relevant results are in there..." (Senior music major)
6. We pulled out a random sample of 384 queries. There were 1.2 million searches entered in Summon in Fall 2014.
7. From the 384 queries we eliminated:
● 63 queries for known items that USC did not own
● 22 queries that had unrecognizable or foreign characters
● 21 queries that were for "non-scholarly" formats
Our final sample size = 278
8. Defining relevance
1. Relevant: a match for the known item shows up 1st or 2nd in the list of results.
2. Partially relevant: a match for the known item is found 3rd-10th in the list of results.
3. Not relevant: known item is not listed in the first 10 results or no results were returned.
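The three-tier scale above maps cleanly to a small helper function (a sketch of our own; the function name and rank convention are illustrative, not part of the study):

```python
def classify_relevance(rank):
    """Map the 1-based rank of the known item in a result list to the
    study's three-tier scale; rank=None means the item never appeared
    in the first 10 results or no results were returned."""
    if rank in (1, 2):
        return "relevant"
    if rank is not None and 3 <= rank <= 10:
        return "partially relevant"
    return "not relevant"
```

Under this scheme a known item that only shows up 74th in the result list, like the examples on slide 16, scores the same as one that never appears at all.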
16. Partial citation queries (with 2 metadata elements) that did not find relevant results in Summon:
"and then there were none agatha christie" (known item showed up 74th)
"palladio four books on architecture" (known item shows up 75th)
17. Quotes were used in only 6% of the queries (16/278), but…… 100% of these queries returned relevant results.
18.
19. Formatted citations (with 3 or more metadata elements) that did not find relevant results in Summon:

Block G Hartman AM Dresser CM Carroll MD Gannon J Gardner L. A data-based approach to diet questionnaire design and testing. Am J Epidemiol 1986; 124:453-469.
→ does not show up in first 10 results (6 metadata elements)

Brooks M. K. (2010). Hospice services: The dilemmas of technology at the end of life. In T. S. Kerson J. L. M. McCoyd & Associates (Eds.) Social work in health settings: Practice in context (3rd ed. pp. 235-246). New York NY: Routledge.
→ no results returned (9 metadata elements)
20. If fewer metadata elements had been used...

Block A data-based approach to diet questionnaire design and testing.
→ shows up 1st (2 metadata elements)

Brooks M. K. (2010). Hospice services: The dilemmas of technology at the end of life. In T. S. Kerson J. L. M. McCoyd & Associates (Eds.) Social work in health settings: Practice in context
→ shows up 1st (5 metadata elements)
22. Continue teaching users to be more strategic searchers
● Explain the whys of strategic searching and not just the hows
● Explain the organization of information systems and the benefits of field searching (especially for one-word/common-word titles)
● Provide troubleshooting tips for how to deal with the inevitable failed searches
24. Push for in-house solutions
● Using APIs
● Developing programs to "clean" problematic queries
Lean on vendors
• to improve relevancy
• to improve "Did you mean" suggestions
• to provide query reformulation suggestions
Create smarter library systems
Get Catalog usage numbers for 2014 as a point of comparison?
The problem: we were hearing from our users that they were getting "unexpected" results when searching USC's Summon instance.
Google has set people's expectations, and so have our experiences figuring out how to use our complex systems; but it is also due to the fact that some advanced/power users had grown accustomed to the precision searching available in OPACs and library databases.
Mention the data about users not looking beyond the first page of results
We’ve had a wide variety of users express frustration with Summon’s results. These are just 2 examples (gathered when we did usability testing of our website in Fall 2013?)
This problem of getting "unexpected" or irrelevant results is significant because research has found that if users have a negative experience with an online search system they will quickly abandon it. (footnote 20)
Recently more and more librarians and faculty are commenting about discovery layers' ability to deal with known item searches.
the current landscape - do we go with discovery tools or not? Better understand the cause of unexpected results
“One additional challenge lies in the ability of discovery services to find known items. Especially when searching for resources with one-word titles or common words, such as Nature or Time, relevancy-based retrieval may not always return the expected results. Each of the discovery services has improved its handling of known-item searching, but this continues as a point of criticism of performance.”
Marshall Breeding, “The Future of Library Resource Discovery” (February 2015)
These search tools are pervasive on libraries websites, often as the default search option for discovering or accessing materials, and some libraries have made the shift to replacing their OPACs with them.
MOTIVATION OF THE STUDY was to get a clearer picture of why users were getting unexpected results. Highlight our role as reference/instruction librarian
Why does this matter to libraries? - discovery services are becoming the ubiquitous default search option
HYPOTHESIS
We wanted to know:
When and why USC’s discovery service (Summon) returned “unexpected” or irrelevant results when users search for known items.
How do Summon's results compare to Google and Google Scholar in terms of relevancy for known item queries?
We wanted to know:
Are discovery services, and Summon in particular, up to the task?
When and why was USC's discovery service (Summon) returning "unexpected" or irrelevant results, especially for known item queries?
Because Google has set user expectations for online searching experience, we wanted to compare the relevancy of results in Summon to Google and Google Scholar.
Lastly we wanted to gain insight into how users are searching for known items in our discovery system.
Transaction log analysis
Random sample of 384; ended up with 278 since we eliminated queries with unrecognizable characters, and non-scholarly and non-USC owned items.
Recreated the searches in Summon, Google, and Google Scholar.
In Fall 2014 there were close to 1.2 million searches entered into our Summon instance. Of these, 433,863 were unique searches.
We pulled out a random sample of 384 search queries, in order to give us a 95% confidence level, 5% margin of error
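The 384 figure is what Cochran's standard sample-size formula yields for a 95% confidence level and a 5% margin of error; a quick sketch of the arithmetic (our illustration, not the authors' code):

```python
def cochran_sample_size(z=1.96, margin=0.05, p=0.5):
    """Cochran's sample-size formula: z is the z-score for the desired
    confidence level (1.96 for 95%), margin is the margin of error,
    and p=0.5 is the most conservative proportion assumption."""
    return z ** 2 * p * (1 - p) / margin ** 2

print(round(cochran_sample_size()))  # 384
```

For a population as large as 433,863 unique searches, applying the finite-population correction changes this number only negligibly, so 384 stands.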
For Google & GS, locating the full text of an item was not required for the results to be deemed relevant.
Thanks Christal
This slide hints at the results of our study, with Watson, IBM's supercomputer (standing in for Google), crushing the competition.
This chart shows how the 278 known item queries fared across all 3 search tools.
As you can see, Google lives up to its reputation for providing overwhelmingly relevant results. What did surprise us was Google Scholar's middling performance compared to its big brother.
Our findings also confirm our hypothesis that Summon would underperform compared to the two Googles. Even though 76% is not stellar, we were still surprised at how well Summon did at finding relevant results for 3/4 of the queries.
Summon: 212/278 = 76%
Google: 252/278 = 91% of all scholarly known item queries found relevant results in Google
GS: 220/278 = 79%
----- Meeting Notes (3/28/15 06:35) -----
We wanted to better understand why these 66 (24%) searches did not lead to what we categorized as relevant results. We have informally labeled these 66 searches as "FAILED" searches; since users have an expectation of finding the known item at the top of the list, showing up 3rd-10th is often comparable to showing up 40th.
To translate/contextualize this for you → next slide
47/278 = 17% of searches totally failed (these are for things we own)
66/278 = 24% (¼ of searches failed in Summon)
(compared to 6/278 = 2%; 26/278 = 9% failed searches)
48/278 totally failed = 17%
58/278 failed in GS = 21%
HOW MANY OF THESE “RELEVANT RESULTS” had LINKING PROBLEMS?
What proportion of these failed searches were caused by user errors? (next slide have new pie chart with the numbers w/out errors??)
66 irrelevant queries by search type:
Partial citation = 36 (55%)
Title search = 18 (27%)
Advanced = 1 (2%)
Numeric = 2 (3%)
Formatted citation = 9 (14%)
From the 1.2 million searches entered into Summon in the Fall semester, 35% are known items (420,000); 24% of these = OVER 100,000 searches resulted in users not finding the known item they were searching for.
That is 1 out of every 4 known item searches.
36% of 1.2 million, then 24% of that number did not find the known item being searched for (should we also account for the number of known items we don't own?) - THIS SLIDE MIGHT WORK BEFORE LAST SLIDE
As instruction librarians we are particularly interested in how user search behavior impacts the relevancy of results in our discovery system.
So for this paper we looked at the types of searches users were entering when looking for known items – We found that there were five categories.
As you can see, the majority of queries only included the title. The 2nd most frequently used search type was partial citations w/2 metadata elements.
Taken together these two search types comprise 91% of all searches
Only 3% (9/278) of the searches used the Advanced search form.
Partial citations:
63/69 were author + title
36 = author's full name + full title
27 = author's last name + title
The rest:
2 = author + date
1 = title + series title
1 = title + url
1 = authors + date
1 = title + journal title
Here is a side by side comparison of the percentage of relevant results returned by all three search tools according to search type. Because of time constraints we will only focus on the two worst performing ones
Partial citation searches, which comprise the 2nd most common search type in Summon, were among the worst at providing relevant results, which was somewhat surprising/counterintuitive since you would think that by providing more information (but not too much) the query would be more successful.
(the most common are author title, but there were also title+date, and even one title+URL)
52% (22+____/69) of partial citation searches did not find relevant results
21/22 (95%) of the failed searches were author title combinations (there was one failed search that included the title+URL)
91% (63/69) of the partial citation queries were made up of a variation of author+title
32% (22/69) of Partial Citations failed to find relevant results (doesn’t include partially relevant) -
Here are 2 examples: of partial citation searches that experienced catastrophic failure
ASK AUDIENCE: WHAT ADVICE WOULD YOU GIVE TO IMPROVE THESE SEARCHES?
Summon’s Product Manager in an email: “many problems can be solved by using facets”
“The challenge…is what fields to use to trigger this known item searching behavior. Exact matches are the easiest to solve for, especially if you have an exact match across two different fields (such as title and author). The logic around queries that aren’t exact matches (misspellings, omitted words, terms that fall into different fields) is more challenging.”
We re-ran the failed title and partial citation searches, but this time with quotes, to see how they would fare. This matches findings from other studies.
We found that quotes improved the relevancy of partial citation queries 74% of the time!
Quotes would have corrected 74% (20/27) of partial citation searches with two metadata elements that did not find relevant results (this would give Summon 232/278 relevant results = 84%).
Search types for queries w/quotes:
9/16 - (56%) were title searches
5/16 - (31%) was a partial citation (2 elements)
1/16 - (6%) was an advanced/field search
1/16 - (6%) was a copy/pasted citation (it didn't have formatting, though, like many of the other queries in this category): a manually typed book chapter: Pine Lisa. "The Jewish Family." Nazi Family Policy.
(quotes were used one other time, for a video/non-scholarly source, but it was a one word title, so was incorrectly used and didn’t help or hurt the search)
The worst performing search type is the one we labeled "formatted citation searches." These often included abbreviations and formatting, and most appeared to be copy and pasted from course syllabi or reference lists.
Although these searches comprised the smallest portion of our sample (11), they were THE MOST likely to lead to a failed search with no results returned, and this is true ACROSS all 3 search tools. Many studies have reported an uptick in these types of searches.
There were only 11 queries that had 3 or more metadata elements, most often copy and pasted and so included formatting.
Of these 2 were successful in finding relevant results, 1 was partially successful and 8 did not find relevant results.
82% did not find relevant results.
Here are 2 examples from our sample: the first is a journal article, the 2nd is a book chapter, and as you can see they did not fare well. (I gave up looking for the first query…)
In Google, the Brooks chapter fails
Block article shows up first in Google, but does not show up in first 10 in GS
We experimented with removing some of the metadata elements (while leaving some formatting) and we found that in most cases this improved the relevancy, leading us to believe that the number of metadata elements entered clearly has an impact on the relevancy of known item searches
So, what are the implications of our findings, and what are our recommendations?
First, we want to make it clear that there are many more questions to be addressed and that there are certainly other contributors to poor relevancy besides user search behavior.
4 main causes of not getting relevant results (our paper only focused on how user search behavior influenced the relevancy of results):
1. user errors (typos)/poor "Did you mean" suggestions (##??)
2. users entering too much information (formatted citations, urls, full names etc.) (###?) - but Summon should be able to do something about these (clean them)
3. bad relevancy (when quotes work to make the results show up higher)
4. bad metadata or indexing problems (###??) - metadata issue on the listserv: a discovery tool is only as good as the underlying indexing and metadata
Obviously Christal and I believe in the importance and value of teaching users to develop search strategies and to understand how information systems are organized.
EXPLAIN WHY copying and pasting an entire formatted citation is not likely to work in any search tool, which also means explaining why all search systems/search boxes are NOT THE SAME and so must be approached differently.
This will give users more sophisticated mental models for online searching; they will in turn learn how to troubleshoot failed searches (facets, removing formatting and abbreviations).
But the answers to improving the outcome of a search cannot ONLY/ALL be found in teaching users to be more strategic searchers.
Compared to the open web (the fields, full text vs. A&I, naming conventions – abbreviated names and titles, etc.).
“Explain underlying metadata structures and how they can be leveraged to improve search efficacy” (Townsend)
Mental models of search: help users “conceive of information [in library databases] as something with an organization and underlying system, rather than a mysterious cloud of data” (Townsend)
Google is a success engine, it works for ages 4-104. Academic search tools are more sophisticated and allow for more control and precision and don’t offer (as much) fuzzy searching (the training wheels are off);
Framework language:
“understand how information systems (i.e., collections of recorded information) are organized in order to search for and access relevant information”; understand the potential of each type of strategy;
Find Metaliteracy article about teaching
Focusing on teaching the user to be a smarter searcher or…….. Don't fight user behavior; it's an unwinnable battle.
----- Meeting Notes (3/28/15 08:04) -----
FIX THIS
We want to propose that instead of complaining about the lazy search habits of undergraduates, or getting frustrated about users being content with "good enough" results, or dismissing discovery services and Google for dumbing down the research process, we try to empathize with these expectations and behaviors.
The part of the research process that should be (and is) challenging is the analyzing, evaluating and synthesizing of information sources not the finding/searching.
Libraries need to focus more resources and lobby for smarter, more intuitive and more adaptive systems.
Many libraries have come up with in-house solutions. For instance, California State University, Fullerton came up with a "scrubbing program" that identifies and then "cleans" APA-formatted citations entered into their discovery layer, stripping out everything except the title, which as we saw is the most successful search type.
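A minimal sketch of what such a scrubbing step might look like (the regex and function name are hypothetical illustrations of the idea, not Fullerton's actual program):

```python
import re

def scrub_apa_citation(query):
    """If a query looks like an APA-style citation (it contains a
    parenthesized year), keep only the title: the text between the
    year and the next sentence-ending punctuation. Otherwise return
    the query unchanged. A rough heuristic; titles containing
    abbreviations with internal periods would need more careful handling."""
    match = re.search(r"\(\d{4}\)\.\s*(.+?)[.?!](?:\s|$)", query)
    if match:
        return match.group(1).strip()
    return query
```

Run against the Brooks citation from slide 19, this reduces the query to "Hospice services: The dilemmas of technology at the end of life", the kind of title-only search that slide 20 shows succeeding.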
Leaning on vendors - to improve overall relevancy and help features
field weighting and term proximity
“How close together are the query terms found in the record? If closer together, then a record is more relevant”
providing better “Did you mean?” suggestions
adding query reformulation suggestions for searches that get no results
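The term-proximity heuristic quoted above can be illustrated with a toy scorer (our own illustration; no vendor's relevancy engine works exactly this way): find the smallest window of a record that contains every query term, and score tighter windows higher.

```python
import itertools

def proximity_score(query_terms, record_tokens):
    """Return 1/width of the smallest window of record_tokens that
    contains every query term, or 0.0 if any term is missing.
    Brute force over term positions; fine for short illustration records."""
    positions = []
    for term in query_terms:
        hits = [i for i, tok in enumerate(record_tokens) if tok == term]
        if not hits:
            return 0.0
        positions.append(hits)
    best = min(max(combo) - min(combo) + 1
               for combo in itertools.product(*positions))
    return 1.0 / best
```

A record where "diet" and "questionnaire" appear side by side scores 0.5; the farther apart the query terms sit, the closer the score falls toward 0.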
Out of 39 queries with user errors a “Did You Mean” suggestion showed up 7 times but only 2 of those times did the link take you to the known item <<DOUBLE CHECK THIS
“this system is stupid. there are some systems that learn, Google learns” [43:40] - Jeff Edmunds Video;
Discovery tools should adapt so that search results will not suffer no matter how many metadata elements are included
learn how to better deal with 2 or more metadata elements
Smarter systems
Google is smart and adaptable
Discovery vendors need to be smarter and more adaptable
Lastly, we want to leave you with what we realize may be a potentially controversial suggestion. Everyone identifies Google's simple interface and relevance algorithm as the keys to its success. But since 2009 it has been incorporating personalization features into its presentation of results, which are further shaping user expectations.
More and more of our users—including faculty and librarians—are calling on libraries to harness the vast amounts of user data we collect to provide results that are tailored to individual research interests and needs, recognizing that presenting the same relevancy ranked results to everyone no longer works.
Maybe this is the missing link libraries need to "cautiously and conscientiously explore" in order to provide users with "expected" results for all searches.
Make "serendipitous discovery" work - from yesterday's "Limited by search" session ("relationship methodologies" - relationships between content).
One recent proposal, made by David Weinberger of Harvard's Berkman Center for Internet and Society (October 2014 issue of the Chronicle of Higher Education), was to create a "stackscore" which would signify "how relevant an item is to the library's patrons as measured by how they've used it." (footnote 68)
He lists numerous datasets that are either already being collected or that could be easily obtained, which could be factored into developing this score, from renewals and recalls to readings listed on a syllabus. Commercial interests motivated the initial push towards offering personalized services, but bringing personalization technologies into libraries holds the promise of enhancing the breadth, depth, and reach of scholarship and scholarly communication in new and exciting ways.
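One way to picture a stackscore is as a weighted blend of normalized usage signals like the ones Weinberger lists; the signal names and weights below are purely illustrative assumptions, not part of his proposal:

```python
def stackscore(usage, weights=None):
    """Blend local usage signals (each normalized to the 0-1 range)
    into a single 0-1 score; missing signals count as zero.
    Signal names and weights are illustrative placeholders."""
    if weights is None:
        weights = {"checkouts": 0.4, "renewals": 0.2,
                   "recalls": 0.2, "on_syllabus": 0.2}
    total = sum(weights.values())
    return sum(w * usage.get(name, 0.0)
               for name, w in weights.items()) / total
```

Under such a scheme, an item that is heavily checked out and appears on a syllabus would rank above one with the same circulation but no course use.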
Part of Google’s success is due to its use of personal data to enhance the relevancy of search results for each individual user. Libraries will never succeed in providing a truly Google-like search experience without moving in this direction. By offering personalized search systems, libraries will be better able to serve their users, not just in leading them to relevant content, but in anticipating and meeting their future information needs.
----- Meeting Notes (3/28/15 08:04) -----
fix this