This document discusses different types of research: pure research, original research, and secondary research. Pure research is done simply to gain knowledge without a particular purpose, exploring various topics and sources. Original research aims to discover new information not yet found. Secondary research examines existing research from others to draw new conclusions or relationships between studies. Research can also be directed, with a specific focus or goal, or non-directed for general learning without an objective. Research challenges preconceptions and requires defining terms and considering evidence objectively.
Human and natural science – compare & contrast
Maho Tachibana
Human science is the study and interpretation of human experiences, activities, constructs, and artifacts. It attempts to expand knowledge of human existence. There are four key aspects of natural science applied to human science: observation, measurement, experiments, and laws. However, directly observing minds, measuring concepts like thoughts, running controlled experiments on humans, and establishing predictive laws are challenging in human science due to its complex social contexts and moral considerations. As a result, human sciences seem to lack the strong explanatory power of natural sciences and may never be reduced to natural science since human behavior is best explained by meaning and purpose.
The natural sciences involve studying objects and processes observable in nature, such as biology and physics. The scientific method involves making observations, developing hypotheses, making predictions based on hypotheses, and experimentally testing predictions. A key part of the scientific method is that hypotheses can be proven false through experimentation. While scientific knowledge cannot be absolutely proven true, theories that withstand challenges are considered valid within their domain. The development of science involves imagination to develop theories to explain observations. Scientific progress values expanding knowledge, though some argue there should be regulation of controversial areas.
Human sciences deal with studying human behavior and differ from natural sciences in several key ways:
1) It is difficult to conduct controlled experiments on humans; researchers are largely limited to asking people questions or observing their behavior directly.
2) Experiments in human sciences cannot be easily repeated and the scientist cannot isolate or control for all variables that may influence human behavior.
3) Predictions in human sciences are less certain compared to natural sciences due to the non-universal and imprecise nature of hypotheses about human behavior.
4) The language used in human sciences is inherently vague compared to natural sciences.
The document discusses research methods and the scientific method. It provides an overview of key figures in the development of science like Galileo, Popper, Kuhn, and Lakatos. It describes Galileo's experiment dropping objects from the Leaning Tower of Pisa to test hypotheses. It also summarizes Popper's concept of falsifiability, Kuhn's idea of paradigms, and Lakatos' attempt to find common ground between Popper and Kuhn.
The document provides an overview of climate change data and statistics concepts. It includes 3 figures showing land surface temperature data from the Berkeley Earth Surface Temperature study with different timeframes and trend lines applied. It also lists topics to be covered in an intro to statistics course on climate change, including how to collect and interpret data ethically and reduce bias. Finally, it provides the reading list for the course, which covers evolution, Charles Darwin, and more.
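A trend line of the kind applied to the temperature figures is just an ordinary least-squares fit. As a minimal sketch, the snippet below fits a linear trend to a small set of yearly anomalies; the numbers are illustrative placeholders, not actual Berkeley Earth data.

```python
# Hypothetical yearly temperature anomalies (deg C) -- illustrative only.
years = list(range(1970, 1980))
anoms = [-0.08, -0.02, 0.01, 0.05, -0.03, 0.02, 0.06, 0.10, 0.07, 0.12]

def linear_trend(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

slope, intercept = linear_trend(years, anoms)
print(f"trend: {slope:.4f} C/year")
```

The choice of timeframe matters: fitting over a shorter or longer window can change the slope considerably, which is presumably why the figures show the same data with different trend lines applied.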
This document discusses the relationship between philosophy and science, and the role of philosophy of science. It makes three main points:
1. Philosophy of science analyzes and comments on scientific processes and results, but cannot generate new knowledge or predict future scientific advances in the way that science can.
2. Philosophy of science can offer conceptual analysis of scientific methods and ontology, as well as "claim checking" of scientific approaches, but its contributions are limited.
3. If biology and other sciences require a metaphysical foundation, then metaphysics should be treated as an explicit branch of those sciences and approached scientifically rather than by amateur philosophers.
This document outlines key concepts related to the scientific method and philosophy of science. It discusses various models of scientific inquiry including the classical, pragmatic, and logical empiricism models. It also covers types of reasoning like deduction, induction, and abduction. Examples are provided to illustrate abductive reasoning techniques like the duck test and elephant test. Biases, effects, and criticisms of science are also referenced.
1. The document discusses different aspects of how mental maps and beliefs are formed.
2. It explains that mental maps are influenced by a variety of sources like teachers, friends, family, books, and culture.
3. Mental maps can distort reality and be influenced by biases without us realizing it, so common sense and intuition cannot always be trusted.
Philosophy of science, falsification theory, Karl Popper
Khalid Zaffar
The document discusses falsification and its importance in philosophy of science. [1] Falsification proposes that for a theory to be considered scientific, it must be possible to prove it false through testing or observation. [2] Karl Popper introduced the principle of falsification, stating that a theory is scientific if we can identify potential evidence that could show it is incorrect. [3] Being able to falsify theories allows them to be rigorously tested and improved in science, distinguishing science from non-falsifiable claims.
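The asymmetry at the heart of falsification can be made concrete with a toy sketch: no number of confirming observations verifies a universal claim, but a single counterexample refutes it. The "swan" encoding below is purely illustrative.

```python
def is_falsified(universal_claim, observations):
    """Popper's asymmetry: confirmations never prove a universal claim,
    but one counterexample is enough to refute it."""
    return any(not universal_claim(obs) for obs in observations)

def all_swans_white(swan):
    return swan == "white"

# A thousand white swans do not verify the claim...
print(is_falsified(all_swans_white, ["white"] * 1000))               # False: not yet falsified
# ...but a single black swan falsifies it, regardless of prior confirmations.
print(is_falsified(all_swans_white, ["white"] * 1000 + ["black"]))   # True
```

A claim for which no conceivable observation could make `is_falsified` return True would, on Popper's criterion, not count as scientific.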
This document discusses ways to distinguish science from pseudoscience by asking questions about claims. It outlines 10 questions one can ask to "detect baloney", such as whether a claim has been verified by independent sources, fits with established knowledge, and considers evidence that contradicts the claim. These questions help determine the reliability of sources, identify biases, and establish the validity of evidence. The questions also help solve the "boundary problem" of determining where to draw the line between science and pseudoscience when exploring borderline cases.
1. The document outlines the agenda for a mathematics class, including readings on history of mathematics, a podcast, and activities on mind reading, the Monty Hall problem, and coloring shapes.
2. It discusses definitions of mathematics, axioms, theorems, and the relationship between math and reality. Concepts like a priori synthetic knowledge and the certainty of mathematical statements are examined.
3. On Wednesday, students will discuss how statistics and probability relate to their Extended Essay topics and how different interpretations of data affect understanding. They will pose questions about the mathematical aspects of their topics.
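The Monty Hall activity mentioned above lends itself to a quick simulation; a minimal sketch of the standard setup (the numbers of trials and the seed are arbitrary choices) shows why switching doors wins about twice as often as staying.

```python
import random

def monty_hall(trials=100_000, switch=True, seed=0):
    """Simulate the Monty Hall game and return the contestant's win rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)    # door hiding the car
        pick = rng.randrange(3)   # contestant's first choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")   # about 1/3
print(f"switch: {monty_hall(switch=True):.3f}")    # about 2/3
```

The simulation makes the counterintuitive answer tangible: the host's reveal is informative, so switching captures the 2/3 probability that the first pick was wrong.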
This document discusses the demarcation of science from pseudoscience and the criterion of falsifiability. It explores how theoretical sciences like cosmology and theoretical physics deal with phenomena that are unobservable and difficult to falsify. While mathematics and theoretical constructs are useful for developing scientific understanding, overreliance on interpretation of data without direct observation can compromise objectivity and falsifiability. Determining what constitutes science versus pseudoscience or non-science is a complex problem with no definitive answers.
Karl Popper proposed that scientific knowledge is provisional and falsifiable rather than absolutely certain or proven true. He rejected the traditional view that science discovers descriptive laws through induction from facts. Instead, he argued that scientific theories can never be proven true but can be tested by attempting to falsify them through experiments and observations. This view resolved issues with the logical problem of induction and provided a rationale for how scientific knowledge advances through falsification of theories.
This document discusses several key ideas and debates within the human sciences. It compares the human sciences, history, and natural sciences, noting that while human sciences seek generalizations like the natural sciences, studying humans is more complex due to changing societies and individuals. It also discusses the debate between naturalist and interpretivist approaches, and some of the challenges of achieving certainty in the human sciences, such as the complexity of human behavior and societies. Key ideas discussed in more depth include the distinction between correlation and causation, the concept of path dependence, the nature vs nurture debate, and issues around determinism and free will.
Popper rejected inductive reasoning and verificationism, which were approaches used by positivists. He argued that inductive reasoning, which involves generalizing from specific observations, is flawed because a single counter-example can falsify the generalization. Popper proposed falsificationism instead, where a scientific theory must be capable of being proven wrong through empirical testing. A good theory, according to Popper, must be falsifiable but withstand attempts to falsify it. No theory can ever be proven absolutely true, only withstand falsification attempts so far. Popper also argued that science thrives in open societies that allow criticism and debate, while closed societies dominated by rigid orthodoxies tend to stifle scientific progress.
This document discusses the concept of inductive theory building in social sciences. It argues that theory building should be inductive rather than deductive. It critiques contemporary philosophy of science, such as Popper's falsifiability theory, for rejecting induction and embracing deduction. The document provides historical examples of successful inductive theory building in sciences, including Aristotle, Bacon, Newton, and theories in psychology. It concludes by suggesting guidelines for inductive theory building and policies journal editors could adopt to encourage this approach.
The document discusses Karl Popper's theory of falsification and its evolution over time. It explains that Popper argued scientific theories are never truly verified, but can be falsified by a single contradictory observation. Theories should aim to be falsifiable to be considered scientific. Later, Popper acknowledged natural selection as testable despite initial doubts. The document also examines criticisms of falsification, such as that theories may not be falsified even when observations contradict them, depending on how the theory is modified in response.
Scientific method vs. hollow earth theory
Marcus 2012
http://marcusvannini2012.blogspot.com/
http://www.marcusmoon2022.org/designcontest.htm
Shoot for the moon and if you miss you'll land among the stars...
This document provides an overview and introduction for a philosophy course titled "Introduction to Philosophy" being offered in the fall 2017 semester. It outlines the following key points:
- Philosophy involves seeking to understand the nature of reality and questioning common assumptions and perspectives. It can challenge everyday ideas and undermine common sense.
- The course will cover traditional areas of philosophy like logic, epistemology, metaphysics, and ethics. Students will read and discuss Stephen Law's book "The Philosophy Gym" in class.
- Studying philosophy establishes a foundation for other disciplines by addressing foundational questions about knowledge, existence, morality, and more. Philosophical issues underlie debates in science, religion, and ethics.
This document discusses several perspectives on the nature of morality:
1. Moral skepticism argues that morality is subjective and there is no objective moral truth. Moral statements are merely expressions of preference.
2. Moral relativism claims that morality is determined by one's society or culture and there are no universal moral values. However, this view faces issues with tolerating intolerant practices.
3. Some philosophers like Kant have argued that morality can be known through reason and deriving universal moral rules and duties. However, critics argue this view faces counterexamples where following one's duty seems to lead to immoral outcomes.
4. Utilitarianism holds that the morally right action is the one that produces the greatest overall happiness or well-being.
Lessons learned from 25 years of battling creationists, Scientologists, and f...
Jim Lippard
Suggested rules of thumb for online debate based on experience arguing with fundamentalists, creationists, and Scientologists. Given at the American Humanist Association conference, Tempe, AZ, June 4, 2009.
This document provides an introduction and summary of the book "Anatomy of a Phenomenon" by Jacques Vallee. It discusses recent UFO sightings across the United States and calls for further scientific study of the phenomenon. The author has studied UFO files from the US Air Force and other sources. He believes UFOs deserve rational investigation and aims to understand why the idea of extraterrestrial intelligence provokes strong reactions. The book will examine the UFO phenomenon from scientific, military, philosophical and public perspectives.
1. The document discusses three main forms of skepticism: global vs restricted, academic vs Pyrrhonian, and methodological skepticism.
2. It provides examples of famous skeptics like Rene Descartes who employed mitigated skepticism and arrived at his famous conclusion "I think therefore I am".
3. The document also examines some common conspiracy theories and attempts to debunk claims about events like the moon landing, 9/11, and the origins of AIDS using scientific evidence.
Scientific skepticism involves critically evaluating claims using scientific methods and requiring adequate evidence. It is important in areas prone to pseudoscience like UFOs, ghosts, and alternative medical treatments. Skepticism seeks to understand the world through empirical evidence and requires independent confirmation of facts. Popular modern skeptics who promote critical thinking include Carl Sagan, James Randi, and Richard Dawkins. Skepticism is important for consumers, educators, politicians and the media to separate fact from fiction.
The document summarizes the results of a questionnaire about dreams. It finds that most respondents were ages 22-27, watch TV in the evenings, and enjoy documentaries. Over half reported dreaming every night and feeling like dreams are real. Common dream themes included relationships, falling, and fears. The results indicate dreams will be a relatable topic for the target audience of the documentary.
Apigee Edge is a platform for API management that allows organizations to securely publish, monitor, and manage APIs. It provides API services including security, traffic management, analytics, and developer services. Apigee Edge handles the full lifecycle of APIs from development to publishing to consumption. It offers capabilities for access control, analytics, monitoring, documentation and more to help organizations maximize the value of their APIs.
The document contains a feedback questionnaire evaluating various design elements of an indie magazine, including the catchiness of the name, suitability of the color scheme, appeal to teenagers, price, front cover design, contents page organization, choice of images, readability, and engagement of articles. Respondents are asked to answer questions about each element and explain their perspectives.
The document discusses the history and services provided by health spas. Health spas have existed for thousands of years, originating from ancient societies that believed natural springs had healing properties. Today, health spas promote overall fitness and wellness through services like water therapies, massage, exercise classes, and detoxification treatments. Common goals of visiting a health spa include stress relief, relaxation, and improving overall health and wellness.
Question 2: How effective is the combination of our main product and ancillar...
meganfellowes
The student created a documentary, poster, and radio trailer that work together to promote and reinforce the topic of dreams. Specifically:
1) The same fonts, voiceovers, music, and clips are used across the products to link them together.
2) The poster uses still images from the documentary and emphasizes the themes of dreams, combining the visual elements.
3) The radio trailer uses clips and a similar script to the documentary to grab listeners' attention and remind them to watch the airing on TV.
4) Together the products target an audience of 20-30 year olds, promote the broadcast schedule, and reinforce each other's treatment of the topic of dreams.
Channel 4 has branding requirements for print advertisements including logos, titles, hashtags and websites to promote documentaries. Megan researched these conventions like using black and white logos and contrasting colors for readability. The main image must be eye-catching, relevant to the topic, and unique to stand out from other documentaries.
The DSR must launder microfiber pads before demonstrating tools to clients. They should research client needs by observing current procedures and documenting issues. The DSR must practice using the tools themselves before instructing clients. During demonstrations, the DSR explains each step and has clients practice the techniques to build confidence. The closing involves reviewing results versus current methods and setting up the next meeting.
Lilit owns a makeup studio where she provides makeup services to customers. She has 5-10 appointments daily and is well-known for her friendly, personalized customer service. As a professional makeup artist, Lilit has a variety of specialized brushes, palettes, and other tools to create unique looks for each customer. She is dedicated to her clients and their satisfaction, even reapplying makeup for brides at their weddings. Lilit also teaches makeup seminars on Sundays to share her expertise with students.
The document discusses several key elements of documentary filmmaking including the use of a standard English narrator to drive the narrative, incorporating music or sound effects to provide emotion, and having interviews that follow rules like the rule of thirds. It also addresses different types of documentaries that can have open or closed narratives and linear vs non-linear structures. Proper use of mise-en-scene, shot selection, captions, subtitles and text are emphasized to effectively communicate information to viewers.
Megan Fellowes is a senior manager at a marketing firm based in Chicago. She has over 15 years of experience in marketing and communications. Megan received her bachelor's degree in business administration from the University of Illinois.
Slide Share allows for web-based presentation sharing with small or no file size required, keeping presentations always up to date online. However, presentations uploaded to Slide Share cannot be edited once shared.
The document discusses feedback from an audience questionnaire about a documentary called "Awake Inside a Dream". The summary is:
The questionnaire received positive feedback on engaging audiences in the first 5 minutes. Respondents also felt the documentary had qualities of professional documentaries in terms of effects, transitions and music matching the visuals. Feedback confirmed the voiceover successfully represented the topic of dreams. Some feedback suggested including more stories from people about their dreams. Overall, the documentary was well-received based on the questionnaire responses.
In what ways does your media product use, develop, or challenge forms and con...meganfellowes
The document discusses the codes and conventions used in the author's media products for an indie magazine, including the front cover, contents page, and double page article spread. Key elements mentioned are using colors like red, white, black, and yellow that are recognized in indie magazines. Images, headings, and promotional elements are positioned following conventions. Fonts and layout are also used consistently across pages to tie the magazine together as a brand. The double page spread further develops conventions with colors, direct address, drop caps and quotes to engage readers.
Dokumen tersebut membahas tentang cahaya, termasuk sumber cahaya alami dan buatan, sifat sumber cahaya, satuan-satuan terkait cahaya seperti kuat cahaya dan arus cahaya, alat pengukuran cahaya, serta penggunaan cahaya dalam bidang kedokteran seperti endoskop dan sinar-sinar seperti ungu ultra, merah infra, dan biru.
Este documento describe las diferentes fuentes de información digital, incluyendo fuentes primarias, secundarias y terciarias. Las fuentes primarias contienen información original de libros, artículos y publicaciones científicas. Las fuentes secundarias incluyen publicaciones periódicas, enciclopedias y diccionarios que resumen fuentes primarias. Las fuentes terciarias son herramientas como clasificaciones y tesauros utilizadas por bibliotecarios. Finalmente, las bibliotecas virtuales proveen acceso a contenidos digitales de
This document introduces the philosophical problem of skepticism and our ability to know anything beyond our own minds. It argues that while we assume an external world exists based on our senses, we cannot prove this from within our own minds as all evidence comes through our experiences and thoughts. This leads to the possibility that nothing exists beyond our minds, a view called solipsism, or that we cannot know anything beyond our present thoughts, a form of skepticism. The document considers various responses to these arguments but finds no conclusive way to prove our knowledge of an external world.
This document provides an overview and introduction to research methods for social science. It discusses the basic elements and assumptions of the scientific method as applied to social research. Specifically, it notes that social science research is empirical, relying on observation and data, and requires replication of findings. It also outlines four main types of social research: applied empirical research, theory-building, normative philosophy, and formal theory. The document is intended to serve as a brief introduction and handbook for students conducting social science research.
Psychology is defined as the scientific study of behavior and mental processes. Psychologists use empirical research methods like observation and experimentation to systematically study topics related to memory, emotion, learning, development, intelligence and more. While common sense beliefs are often wrong, psychological research tests theories and collects data to develop valid conclusions about human behavior.
WHERE TO START CHP. 2LEARNING OBJECTIVES· Discuss how a hypo.docxphilipnelson29183
WHERE TO START CHP. 2
LEARNING OBJECTIVES
· Discuss how a hypothesis differs from a prediction.
· Describe the different sources of ideas for research, including common sense, observation, theories, past research, and practical problems.
· Identify the two functions of a theory.
· Summarize the fundamentals of conducting library research in psychology, including the use of PsycINFO.
· Summarize the information included in the abstract, introduction, method, results, and discussion sections of research articles.
Page 21THE MOTIVATION TO CONDUCT SCIENTIFIC RESEARCH DERIVES FROM A NATURAL CURIOSITY ABOUT THE WORLD. Most people have their first experience with research when their curiosity leads them to ask, “I wonder what would happen if …” or “I wonder why …,” followed by an attempt to answer the question. What are the sources of inspiration for such questions? How do you find out about other people's ideas and past research? In this chapter, we will explore some sources of scientific ideas. We will also consider the nature of research reports published in professional journals.
RESEARCH QUESTIONS, HYPOTHESES, AND PREDICTIONS
The result of curiosity is a question. Researchers use research questions to identify and describe the broad topic that they are investigating, and then conduct research in order to answer their research questions. A good research question identifies the topic of inquiry specifically enough so that hypotheses and predictions can be made. A hypothesis is also a question; it makes a statement about something that may be true. Hypotheses are more specific versions of research questions; they are directly testable whereas a research question may not be. Thus, a hypothesis is a tentative idea or question that is waiting for evidence to support or refute it. Once a hypothesis is proposed, data must be gathered and evaluated in terms of whether the evidence is consistent or inconsistent with the hypothesis. Researchers also make specific predictions concerning the outcome of research. Where a research question is broad and a hypothesis is more specific, a prediction is a guess at the outcome of a hypothesis. If a prediction is confirmed by the results of the study, the hypothesis is supported. If the prediction is not confirmed, the researcher will either reject the hypothesis or conduct further research using different methods to study the hypothesis. It is important to note that when the results of a study confirm a prediction, the hypothesis is only supported, not proven. Researchers study the same hypothesis using a variety of methods, and each time this hypothesis is supported by a research study, we become more confident that the hypothesis is correct.
Figure 2.1 shows the relationships among research questions, hypotheses, and predictions graphically. As an example, consider Cramer, Mayer, and Ryan (2007). They had general questions about college students’ use of cell phones while driving: “Are there differences among gro.
Here are potential revisions to improve the LOI:
1. Focus on a specific aspect or element of the education system to change rather than the broad question of whether the entire system should change. For example:
Should high-stakes standardized testing be reduced or replaced in K-12 public education?
2. Provide context or rationale for why the change is being proposed. For example:
How might reducing emphasis on standardized testing in public schools allow for a more well-rounded, student-centered approach to learning?
3. Suggest exploring multiple perspectives on the issue rather than taking a definitive stance. For example:
What are the arguments for and against proposed reforms to reduce standardized testing in U.S
Value of Science Essay
The Scientific Method Essay
scientific literacy Essay
Essay on Forensic Science
Computer Science Essay
Science Essay
Scientific Theory Essay
Environmental Science Essay
Science Honor Society Essay
Reflective Essay On Science
My Passion For Science
Essay about Life Science
This document provides an overview of topics covered in a peer counseling session, including reviewing homework and introducing psychology's main areas of analysis. It discusses how psychology has developed from fields like philosophy and biology and how early pioneers came from many disciplines and countries. Contemporary psychology takes a biopsychosocial approach and considers the interaction between nature and nurture. It also outlines psychology's main areas of analysis including biological, psychological, and social-cultural influences.
Lecture 1 in the Research Methods series.
See also notes for the Research Methods series: http://www.slideshare.net/lenallis/research-methods-lectures-notes
This lecture series aims to cover the basics of research methods for undergraduate students. By the end of the series students should understand:
-Why research is important
-How to identify good and bad sources of information
-How read critically
-How to write clearly
-Quantitative and Qualitative research
-The basics of experimental method
The overall point should be for students to take the activity of research seriously, but also to be motivated to go and conduct research and engage critically with material.
This document discusses key aspects of science including its methods, assumptions, and types of reasoning. It notes that science involves systematic, documented investigation of natural phenomena through observation and experimentation. Both deductive and inductive reasoning are used in science to develop theories from data or deduce expectations. The scientific method includes observing, generalizing, reasoning, and reevaluating findings. Methodology, or the approach used, is also discussed in relation to political science. Both quantitative and qualitative methods are outlined.
This document is the preface to James Mark Baldwin's book "The Story of the Mind". It discusses Baldwin's goals and approach in writing the book. He aimed to make psychology more accessible while maintaining scientific accuracy. He incorporates some of his previous work to reach a broader audience. The preface also addresses Baldwin's view that evolution theory can be useful when applied to the mind, helping to explain its development over time according to natural laws.
This document discusses the definitions and purposes of history, philosophy, and science. It provides:
- History is the study of the past, specifically how it relates to humans. Philosophy comes from the Greek word for "love of wisdom" and investigates the most general questions about existence, knowledge, values, and meaning.
- Science is a disciplined attempt to find out what exists, how things work, why they work that way, what could exist, how things could work if they did exist, what cannot exist and why. It progresses from craft to establishing theories through representation, ontology, and techniques for modeling.
- The boundaries between craft, science and engineering are blurred. Philosophy of science is concerned with
Class Notes - Critical Thinking and The Nature of Knowledgeestice
This document discusses the nature of philosophy and critical thinking. It begins by defining philosophy as the love of wisdom and lists some common philosophical questions. It then outlines the main branches of philosophy: metaphysics, epistemology, logic, and axiology. For each branch, it provides examples of the types of questions examined. The document also discusses the five pillars of critical thinking: logic, argument, rhetoric, background knowledge, and attitudes/values. It defines key terms like claims, truth, knowledge, and belief. Overall, the document provides a high-level overview of philosophy and critical thinking concepts.
This document discusses methods for conducting social research. It explains that social scientists use both quantitative and qualitative methods. Quantitative methods are used to study large populations and establish relationships between variables, but cannot capture the richness of individual experiences. Qualitative methods focus on understanding meanings and interpretations through techniques like interviews and observation. The document also notes that social research aims to move beyond common sense understandings and challenge prejudices by taking a scientific approach.
This document discusses the definitions and relationships between history, philosophy, science, and their various subfields. It provides definitions of history as the study of the past as it relates to humans. Philosophy is defined as the study of fundamental problems regarding existence, knowledge, values, reason, language, and more. Science is defined as a disciplined attempt to understand what exists, how and why things work, what could exist but doesn't, and more. The document also discusses the relationships between craft, science, and engineering over time. It provides overviews of various philosophical and scientific concepts and debates.
Mba year 1_571216_nol
571216 - BUSINESS RESEARCH METHODS
Research is finding out what you don't already know. No one knows everything, but
everybody knows something. However, to complicate matters, often what you know, or
think you know, is incorrect.
There are two basic purposes for research: to learn something, or to gather evidence. The
first, to learn something, is for your own benefit. It is almost impossible for a human to
stop learning. It may be the theory of relativity or the RBIs of your favorite ball player,
but you continue to learn. Research is organized learning, looking for specific things to
add to your store of knowledge. You may read SCIENTIFIC AMERICAN for the latest
research in quantum mechanics, or the sports section for last night's game results. Either
is research.
What you've learned is the source of the background information you use to communicate
with others. In any conversation you talk about the things you know, the things you've
learned. If you know nothing about the subject under discussion, you can neither
contribute nor understand it. (This fact does not, however, stop many people from joining
in on conversations anyway.) When you write or speak formally, you share what you've
learned with others, backed with evidence to show that what you've learned is correct. If,
however, you haven't learned more than your audience already knows, there is nothing
for you to share. Thus you do research.
THREE TYPES OF RESEARCH
There are three types of research: pure, original, and secondary. Each type has the goal of
finding information and/or understanding something. The difference comes in the
strategy employed in achieving the objective.
Pure Research
Pure research is research done simply to find out something by examining anything. For
instance, in some pure scientific research scientists discover what properties various
materials possess. It is not for the sake of applying those properties to anything in
particular, but simply to find out what properties there are. Pure mathematics is for the
sake of seeing what happens, not to solve a problem.
The fun of pure research is that you are not looking for anything in particular. Instead,
anything and everything you find may be joined with anything else just to see where that
combination would lead, if anywhere.
Let's take an example. I was reading a variety of books and magazines once. There were
some science fiction novels, Jean Auel's THE CLAN OF THE CAVE BEAR, Carl
Sagan's BROCA'S BRAIN, several Isaac Asimov collections of science essays and two
of his history books, ADVERTISING AGE and AD WEEK magazines, some programs
on PBS, a couple of advertising textbooks I was examining for adoption in my class, and
several other things I can't even remember now. This was pure research; I was reading
and watching television for the sake of reading and watching about things I didn't know.
Relating all of the disparate facts and opinions in all of these sources led me to my
opinions on stereotyping and pigeonholing as vital components of human thought, now a
major element in my media criticism and advertising psychology classes. When I started I
had no idea this pure research would lead where it did. I was just having fun.
Original Research
Original, or primary, research is looking for information that nobody else has found.
Observing people's responses to advertising, studying how prison sentences influence
crime rates, and doing tests, observations, experiments, etc., are all ways to discover something new.
Original research requires two things: 1) knowing what has already been discovered,
having a background on the subject; and 2) formulating a method to find out what you
want to know. To accomplish the first you indulge in secondary research (see below).
For the second, you decide how best to find the information you need to arrive at a
conclusion. This method may be using focus groups, interviews, observations,
expeditions, experiments, surveys, etc.
For example, you can decide to find out what the governmental system of the Hittite
Empire was like on the basis of their communication system to determine how closely the
empire could be governed by a central bureaucracy. The method to do this original
research would probably require that you travel to the Middle East and examine such
things as roads, systems of writing, courier systems without horses, archeological
evidence, actual extent of Hittite influence (commercial, military, laws, language,
religion, etc.) and anything else you can think of and find any evidence for.
Secondary Research
Secondary research is finding out what others have discovered through original research
and trying to reconcile conflicting viewpoints or conclusions, find new relationships
between normally non-related research, and arrive at your own conclusion based on
others' work. This is, of course, the usual course for college students.
An example from recent years was the relating of tectonic, geologic, biologic,
paleontologic, and astronomic research to each other. Relating facts from these fields
led to the conclusion that the mass extinctions of 65 million years ago, including
the dinosaurs, were the result of an asteroid or comet striking the earth in the North
Atlantic at the site of Iceland. (For a full explanation see THE GREAT EXTINCTION by
Michael Allaby and James Lovelock.) Later research based on the above has found a
potential crater for the impact on the Yucatan Peninsula.
Secondary research should not be belittled simply because it is not original research.
Fresh insights and viewpoints, based on a wide variety of facts gleaned from original
research in many areas, have often been a source of new ideas. Even more, they have
provided a clearer understanding of what the evidence means without the influence of the
original researcher's prejudices and preconceptions.
DIRECTED AND NONDIRECTED RESEARCH
Research can be directed or non-directed. Non-directed research is finding out things for
the sheer fun of finding them out. Reading a newspaper or the entire Encyclopedia
Britannica, or asking several people how they feel about something is non-directed
research. It has no specific purpose beyond increasing your store of knowledge about the
world (or everything in general). Watching television is non-directed research, as is
reading a magazine, science fiction, mysteries, historical fiction, or anything else.
Everything you don't think of yourself contains information you don't have, and is thus
research.
Directed research, on the other hand, is done with a specific purpose in mind. The
purpose could be to make a point, write a paper or speech, or simply know more about a
specific thing. It is directed since it deals with something specific, and someone decides
what to try next. It simply doesn't have a specific outcome in mind. For example, directed
research in microelectronics is not trying to achieve a specific goal. It does, however,
deal specifically with microelectronics, be it the conducting properties of alloys and
compounds, electron etching, or dual bonding. It does not concern itself with
anthropology. There is also a researcher or project director who decides what is worth
pursuing and what is not.
Directed research is what you want to do when you are preparing a report. You have a
specific goal in mind, to communicate what you want your audience to know about your
topic. Thus, you direct your research toward finding what you can about your topic, not
to find out what there is to know about whatever you come across.
#
Research, pure, original or secondary, carries with it an inherent danger to those who are
close-minded or comfortable in their preconceptions and prejudices. In case you're
wondering, that includes everybody. However, there are people who, having arrived at a
conclusion by whatever means, reject anything that contradicts, or at least doesn't
support, their preconceptions and prejudices. Research has at its essence the shakeup of
what you already know (if you already know it, it isn't research, it's self-congratulation
for perspicacity). Let's take a look at how this works.
Research may show that what you already know isn't correct. This is a hard thing for
many people to accept. You will, on occasion, come across a piece of evidence that
contradicts your a priori assumptions (those that you hold as self-evident, something is
simply because it is), and that is at best disconcerting and at worst traumatic. For
example, you may hold an a priori assumption "all men are created equal". You may then
find an article that states "it is a basic fact of life that all men are inherently unequal"
(people raised in the caste system in India would find that statement so true it wouldn't
need to be said). Which statement is correct? Think about it for a moment.
. . .
If you've actually thought about it, you should have come to the conclusion that both
statements, "all men are created equal," and "all men are unequal," are correct. They are
also both incorrect. They are also both meaningless noises as evidence. They are, by
nature, unprovable and thus not evidence.
What is evidence in this case? Your first step must lie in defining your terms.
What are "men"? Do you mean the male sex of the human species? Do you mean human
beings in general: male, female, regardless of age, race, economic or social position, all
socio-economic systems and governments?
What do you mean by "all"? All "men" (whatever that means) that are like you? That are
not like you? That are like anything at all? The word "all" connotes "without limit". You
put no limits on what are "men"? Are women "men"? Are children, whatever sex, "men"?
Are you discussing sociology, biology, politics, historicity, economics? In what context?
Are you discussing war, voting, pay rates, restrooms?
What do you mean by "created"? Born through biological processes? Through
technological procedures (test tube babies, cloning, genetic engineering)? By some
supernatural intervention with universal entropy? By government decree?
What do you mean by "equal"? Under the law? Under the sun? Under the divinity of your
choice? Equal to what? You? Others?
If you find these questions confusing, good. You're thinking about them.
If you find these questions irritating and/or ridiculous ("everyone knows what 'All men
are created equal' means!"), then you're being close-minded and will limit your research
to only what agrees with your own prejudices and will discount or totally ignore anything
that contradicts your own narrow ideas. (If you find the above sentence insulting, you
either have an over-developed sense of empathy or you prove my point.)
Let us assume that you define "All men are created equal" as "Every human being,
without exception, is born exactly the same as every other human being" ("all" as in
totality, "men" as human beings, "created" as born, "equal" as in 2 + 2 = 4). Is that what
you mean by "All men are created equal"? All humans are born physically, biologically,
socially, economically, politically, geographically, intellectually, etc., the same? One
needs only enter a maternity ward to realize that such a case is ridiculous.
Let us change the definition slightly. "Every human being, without exception, is
spontaneously invented by God exactly the same as every other human being". The
question becomes, "Which God?" Yahveh, the Christian God, Allah, Zeus, Wodin,
Osiris, etc.? This definition also leaves the above questions intact.
Perhaps the word that needs defining is "equal". "Every human being, without exception,
is born evenly balanced with every other human being." Does this mean that for every
poor human there's a wealthy one? For every fat human there's a thin one? For every tall
human there's a short one? Is any of those what you mean by the phrase?
What has happened to the phrase "All men are created equal" as evidence to prove a point
you wish to make? The answer to this question is, "It's disappeared." The sentiment is
just that, a sentiment. Semantically, it's meaningless. Emotionally, it's extremely
effective. As evidence, it doesn't exist.
#
The research you do is designed to give you the ammunition you need to back up what
you have to say, even with those that disagree with you and question what you say. That
ammunition is evidence that your opponent can, or has no choice except to, agree with.
You will, of course, have those that disagree with what you say; nobody agrees with
anybody on everything. Thus, if you make a point, you must back it up with evidence that
even those that disagree must accept. Such evidence must be what is termed objective;
that is, evidence that even those that disagree can discover for themselves. For example,
Galileo said that objects, regardless of their weight, fell at the same speed. Aristotle said
that heavy objects fell faster than light objects. Galileo did experiments that demonstrated
his ideas. Those that disagreed with him finally stopped arguing "common sense" and ran
the same experiments -- and demonstrated Galileo's ideas. Such objective evidence could
not be argued away and thus the evidence was accepted.
The objectives of research
There are five general objectives that research - in general and more specifically about
processes - may attempt to achieve. They are:
1. description
2. explanation
3. forecasting
4. control
5. modelling
These objectives are not completely independent from each other, for the explanation of a
phenomenon relies in part on its description, its forecast requires a detailed explanation,
and so on. But researchers may concentrate on one or the other aspect. Most important,
the objective pursued will affect the tools and techniques employed for the analyses.
The two most frequent objectives are description and explanation. Description is most
often an exploratory phase undertaken using graphical representations and statistical
measures that are not inferential, while explanation involves precise hypotheses to be
tested and employs inferential statistical tests.
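As a rough sketch of this distinction, the hypothetical Python example below first summarizes two samples with non-inferential descriptive measures, then computes a Welch t statistic to test the precise hypothesis that the two means differ. The data and the choice of Welch's test are assumptions added for illustration, not part of the text above.

```python
import math
import statistics

# Hypothetical reaction-time samples (ms) from two conditions
control = [312, 298, 305, 321, 290, 315, 308, 300]
treated = [285, 279, 296, 270, 288, 291, 275, 283]

# Description: summarize each sample with non-inferential measures
for name, data in (("control", control), ("treated", treated)):
    print(name, "mean:", round(statistics.mean(data), 1),
          "sd:", round(statistics.stdev(data), 1))

# Explanation: a Welch two-sample t statistic tests the precise
# hypothesis that the two condition means differ
def welch_t(a, b):
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / \
           math.sqrt(va / len(a) + vb / len(b))

print("Welch t:", round(welch_t(control, treated), 2))
```

In practice the t statistic would be compared against a reference distribution to obtain a p-value; the point here is only that description summarizes, while explanation tests a stated hypothesis.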
Modelling is the latest and broadest objective. It requires that the descriptive and
explanatory phases have provided sufficient information and knowledge about the system
to build a model that synthetically gathers the various variables in a coherent and
parsimonious way.
Control is an objective rarely set in psychological research (for it brings important ethical
considerations), and forecasting is just a little more frequent. We will not address these
two objectives in this work.
What Are The Characteristics Of The Research You Would Like To Have Funded?
There are many transportation research programs, each with distinct focus and
characteristics. To strengthen your chances of success in being funded, this chapter is
intended to help you consider the characteristics of the research statement you would like
to see funded. Research characteristics are important for two reasons: 1) they help you
identify which research programs are the best fit for your research statement, and 2)
clearly addressing these characteristics in your research statement increases your chances
of selection. Important characteristics to consider when writing a research statement
include geographic relevance, transportation mode or topic, funding required, urgency,
type of research needed, and partnership and cost-sharing interests.
GEOGRAPHIC RELEVANCE
How widespread is the problem you are trying to address? Is it experienced in countries
around the world (e.g., intersection design questions or air quality issues)? Is it strictly a
problem in the United States (e.g., how to meet U.S. DOT planning requirements)? Is it
shared by a region or several organizations (e.g., deicing concerns or design in seismic
zones)? Or is it an even more specific problem that exists only in a small number of
locations (e.g., specific species or geology)?
Geographic relevance will affect the programs to which you submit your research
statement, and will also affect the details that need to be included in the statement.
National research programs, such as the National Cooperative Highway Research
Program, focus on research statements that address problems experienced in a majority of
the states. However, a research statement focused on a more localized problem while
explaining how the research product could benefit a national audience can be successful.
TRANSPORTATION MODE OR TOPIC
If your research focuses on a specific mode of transportation, your decision about the
funding source may be simplified, because many research programs focus on such
modes. If, on the other hand, your research need focuses on policy, administration, or
other non-modal transportation issues, the appropriate program may be less clear cut. In
this case, contacting potential research program staff may be necessary.
In addition, some research programs fund only certain topics. Some examples include the
Hazardous Materials Cooperative Research Program and the National Cooperative
Freight Research Program.
FUNDING REQUIRED
Research programs vary widely in the maximum amount of money provided for each
project. It is important to understand the funding-level guidelines and limitations of a
research program when considering a research statement submittal. Proposing a $400,000
project to a program that funds projects of $100,000 or less will not get your research
statement funded.
URGENCY
Research programs vary in their time frame for delivery. Finding a research program that
matches the urgency of your research statement is critical. In some programs, it may take
up to 3 years from the submission of a research statement to publish a research report.
Other programs address needs that can be met within 6 months.
TYPE OF RESEARCH NEEDED
The term research is used very broadly in this web page because the work conducted in
the interest of advancing the transportation profession cuts across a number of activities.
A more formal definition and classification of transportation research is provided in
Appendix A. Transportation research can be as fundamental as testing materials for
transportation infrastructure or as detailed as a statistical analysis of large data sets to
identify the public’s response to rising gas prices. Applied research exists somewhere in
the middle of the spectrum, using fundamental research to solve transportation problems.
PARTNERSHIP/OPPORTUNITIES FOR COST SHARING
Some programs require cost sharing or a local match. The selection of your project may
require that your research statement include information on where additional funding is
available. For other research programs, cost sharing may not be required but could
enhance the project’s chances for success.
Hypothesis Vs Theory
A hypothesis is an educated guess: a prediction about the relationship between two
or more variables, and a prediction as to what you expect to find.
Hypotheses are more specific than theories.
A single theory can generate many different hypotheses.
Results of a single research study will not prove or disprove a theory.
◦ If the hypotheses offered by the theory are confirmed, the theory is
supported (not proved).
◦ If lots of studies reveal that many of the hypotheses generated by the
theory are false, the theory must be reevaluated.
What makes a good theory
1. Falsifiability - The theory must make sufficiently precise predictions that we
can at least imagine evidence that would contradict the theory.
Examples: Frustration-aggression theory
Freud’s theory of repression.
Theory of psychic ability
If something is not falsifiable, it doesn’t mean it is wrong, simply that it has no
place in science.
2. Parsimony – simplicity
The best theory is the one that makes the fewest assumptions.
All things being equal, the simplest theory is the best theory.
Also known as Ockham's razor:
the simpler of two or more competing theories is preferable, and the unknown
should first be explained in terms of the known.
E.g., theories of intelligence
Theories of UFO’s
Magic acts
Warning: simple theories are not always right.
3. Generativity - A good theory doesn’t just explain results that have been found,
but it also generates predictions that can be tested
Research is promoted by the offering of a good theory.
E.g., frustration-aggression – little evidence for the theory initially, but it
generated a lot.
4. Precision – the theory makes precise predictions.
Ambiguity is bad for a theory.
Predictions must have consistency: there cannot be internal contradictions.
5. Good track record – the theory holds up to research results. Studies have tested
the hypotheses and have provided support.
Research Design
Research design can be thought of as the structure of research -- it is the "glue" that holds
all of the elements in a research project together. We often describe a design using a
concise notation that enables us to summarize a complex design structure efficiently.
What are the "elements" that a design includes? They are:
• Observations or Measures
These are symbolized by an 'O' in design notation. An O can refer to a single measure
(e.g., a measure of body weight), a single instrument with multiple items (e.g., a 10-item
self-esteem scale), a complex multi-part instrument (e.g., a survey), or a whole battery of
tests or measures given out on one occasion. If you need to distinguish among specific
measures, you can use subscripts with the O, as in O1, O2, and so on.
• Treatments or Programs
These are symbolized with an 'X' in design notations. The X can refer to a simple
intervention (e.g., a one-time surgical technique) or to a complex hodgepodge program
(e.g., an employment training program). Usually, a no-treatment control or comparison
group has no symbol for the treatment (some researchers use X+ and X- to indicate the
treatment and control respectively). As with observations, you can use subscripts to
distinguish different programs or program variations.
• Groups
Each group in a design is given its own line in the design structure. If the design notation
has three lines, there are three groups in the design.
• Assignment to Group
Assignment to group is designated by a letter at the beginning of each line (i.e., group)
that describes how the group was assigned. The major types of assignment are:
• R = random assignment
• N = nonequivalent groups
• C = assignment by cutoff
• Time
Time moves from left to right. Elements that are listed on the left occur before elements
that are listed on the right.
Design Notation Examples
It's always easier to explain design notation through examples than it is to describe it in
words. The figure shows the design notation for a pretest-posttest (or before-after)
treatment versus comparison group randomized experimental design. Let's go through
each of the parts of the design. There are two lines in the notation, so you should realize
that the study has two groups. There are four Os in the notation, two on each line and two
for each group. When the Os are stacked vertically on top of each other it means they are
collected at the same time. In the notation you can see that we have two Os that are taken
before (i.e., to the left of) any treatment is given -- the pretest -- and two Os taken after
the treatment is given -- the posttest. The R at the beginning of each line signifies that the
two groups are randomly assigned (making it an experimental design). The design is a
treatment versus comparison group one because the top line (treatment group) has an X
while the bottom line (control group) does not. You should be able to see why many of
my students have called this type of notation the "tic-tac-toe" method of design notation
-- there are lots of Xs and Os! Sometimes we have to be more specific in describing the
Os or Xs than just using a single letter. In the second figure, we have the identical
research design with some subscripting of the Os. What does this mean? Because all of
the Os have a subscript of 1, there is some measure or set of measures that is collected for
both groups on both occasions. But the design also has two Os with a subscript of 2, both
taken at the posttest. This means that there was some measure or set of measures that
were collected only at the posttest.
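Written out in notation form, the two designs described above look like this (first the basic pretest-posttest randomized design, then the variant with subscripted measures):

```
Pretest-posttest randomized design:     With subscripted measures:

    R   O   X   O                           R   O1   X   O1 O2
    R   O       O                           R   O1        O1 O2
```

Each line is a group; R marks random assignment, X marks the treatment, and vertically aligned Os are collected at the same time.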
With this simple set of rules for describing a research design in notational form, you can
concisely explain even complex design structures. And, using a notation helps to show
common design sub-structures across different designs that we might not recognize as
easily without the notation.
The Marketing Research Process
Once the need for marketing research has been established, most marketing research
projects involve these steps:
1. Define the problem
2. Determine research design
3. Identify data types and sources
4. Design data collection forms and questionnaires
5. Determine sample plan and size
6. Collect the data
7. Analyze and interpret the data
8. Prepare the research report
Research Design
Marketing research can be classified into one of three categories:
• Exploratory research
• Descriptive research
• Causal research
These classifications are made according to the objective of the research. In some cases
the research will fall into one of these categories, but in other cases different phases of
the same research project will fall into different categories.
• Exploratory research has the goal of formulating problems more precisely,
clarifying concepts, gathering explanations, gaining insight, eliminating
impractical ideas, and forming hypotheses. Exploratory research can be
performed using a literature search, surveying certain people about their
experiences, focus groups, and case studies. When surveying people, exploratory
research studies would not try to acquire a representative sample, but rather, seek
to interview those who are knowledgeable and who might be able to provide
insight concerning the relationship among variables. Case studies can include
contrasting situations or benchmarking against an organization known for its
excellence. Exploratory research may develop hypotheses, but it does not seek to
test them. Exploratory research is characterized by its flexibility.
• Descriptive research is more rigid than exploratory research and seeks to describe
users of a product, determine the proportion of the population that uses a product,
or predict future demand for a product. As opposed to exploratory research,
descriptive research should define questions, people surveyed, and the method of
analysis prior to beginning data collection. In other words, the who, what, where,
when, why, and how aspects of the research should be defined. Such preparation
allows one the opportunity to make any required changes before the costly
process of data collection has begun.
There are two basic types of descriptive research: longitudinal studies and
cross-sectional studies. Longitudinal studies are time series analyses that make repeated
measurements of the same individuals, thus allowing one to monitor behavior
such as brand-switching. However, longitudinal studies are not necessarily
representative since many people may refuse to participate because of the
commitment required. Cross-sectional studies sample the population to make
measurements at a specific point in time. A special type of cross-sectional
analysis is a cohort analysis, which tracks an aggregate of individuals who
experience the same event within the same time interval over time. Cohort
analyses are useful for long-term forecasting of product demand.
• Causal research seeks to find cause and effect relationships between variables. It
accomplishes this goal through laboratory and field experiments.
Data Types and Sources
Secondary Data
Before going through the time and expense of collecting primary data, one should check
for secondary data that previously may have been collected for other purposes but that
can be used in the immediate study. Secondary data may be internal to the firm, such as
sales invoices and warranty cards, or may be external to the firm such as published data
or commercially available data. The government census is a valuable source of secondary
data.
Secondary data has the advantage of saving time and reducing data gathering costs. The
disadvantages are that the data may not fit the problem perfectly and that the accuracy
may be more difficult to verify for secondary data than for primary data.
Some secondary data is republished by organizations other than the original source.
Because errors can occur and important explanations may be missing in republished data,
one should obtain secondary data directly from its source. One also should consider who
the source is and whether the results may be biased.
There are several criteria that one should use to evaluate secondary data.
• Whether the data is useful in the research study.
• How current the data is and whether it applies to time period of interest.
• Errors and accuracy - whether the data is dependable and can be verified.
• Presence of bias in the data.
• Specifications and methodologies used, including data collection method,
response rate, quality and analysis of the data, sample size and sampling
technique, and questionnaire design.
• Objective of the original data collection.
• Nature of the data, including definition of variables, units of measure, categories
used, and relationships examined.
Primary Data
Often, secondary data must be supplemented by primary data originated specifically for
the study at hand. Some common types of primary data are:
• demographic and socioeconomic characteristics
• psychological and lifestyle characteristics
• attitudes and opinions
• awareness and knowledge - for example, brand awareness
• intentions - for example, purchase intentions. While useful, intentions are not a
reliable indication of actual future behavior.
• motivation - a person's motives are more stable than his/her behavior, so motive is
a better predictor of future behavior than is past behavior.
• behavior
Primary data can be obtained by communication or by observation. Communication
involves questioning respondents either verbally or in writing. This method is versatile,
since one needs only to ask for the information; however, the response may not be
accurate. Communication usually is quicker and cheaper than observation. Observation
involves the recording of actions and is performed by either a person or some mechanical
or electronic device. Observation is less versatile than communication since some
attributes of a person may not be readily observable, such as attitudes, awareness,
knowledge, intentions, and motivation. Observation also might take longer since
observers may have to wait for appropriate events to occur, though observation using
scanner data might be quicker and more cost effective. Observation typically is more
accurate than communication.
Personal interviews have an interviewer bias that mail-in questionnaires do not have. For
example, in a personal interview the respondent's perception of the interviewer may
affect the responses.
Questionnaire Design
The questionnaire is an important tool for gathering primary data. Poorly constructed
questions can result in large errors and invalidate the research data, so significant effort
should be put into the questionnaire design. The questionnaire should be tested
thoroughly prior to conducting the survey.
Measurement Scales
Attributes can be measured on nominal, ordinal, interval, and ratio scales:
• Nominal numbers are simply identifiers, with the only permissible mathematical
use being for counting. Example: social security numbers.
• Ordinal scales are used for ranking. The interval between the numbers conveys
no meaning. Median and mode calculations can be performed on ordinal numbers.
Example: class ranking
• Interval scales maintain an equal interval between numbers. These scales can be
used for ranking and for measuring the interval between two numbers. Since the
zero point is arbitrary, ratios cannot be taken between numbers on an interval
scale; however, mean, median, and mode are all valid. Example: temperature
scale
• Ratio scales are referenced to an absolute zero value, so ratios between numbers
on the scale are meaningful. In addition to mean, median, and mode, geometric
averages also are valid. Example: weight
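The permissible operations for each scale can be sketched in Python; the data values below are made up for illustration.

```python
import statistics

ids     = ["A17", "B42", "A17", "C09"]   # nominal: identifiers, counting only
ranks   = [1, 2, 2, 3, 5]                # ordinal: ranking; intervals meaningless
temps_c = [10.0, 20.0, 30.0]             # interval: arbitrary zero point
weights = [50.0, 100.0]                  # ratio: absolute zero

# Nominal: the mode (most frequent identifier) is the only valid summary
print("most common id:", statistics.mode(ids))

# Ordinal: median and mode are valid
print("median rank:", statistics.median(ranks))

# Interval: mean, median, and mode are valid, but ratios are not --
# 20 C is not "twice as hot" as 10 C because the zero point is arbitrary
print("mean temperature:", statistics.mean(temps_c))

# Ratio: ratios and geometric averages are meaningful
print("weight ratio:", weights[1] / weights[0])
print("geometric mean weight:", round(statistics.geometric_mean(weights), 1))
```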
Validity and Reliability
The validity of a test is the extent to which differences in scores reflect differences in the
measured characteristic. Predictive validity is a measure of the usefulness of a measuring
instrument as a predictor. Proof of predictive validity is determined by the correlation
between results and actual behavior. Construct validity is the extent to which a measuring
instrument measures what it intends to measure.
Reliability is the extent to which a measurement is repeatable with the same results. A
measurement may be reliable and not valid. However, if a measurement is valid, then it
also is reliable and if it is not reliable, then it cannot be valid. One way to show reliability
is to show stability by repeating the test with the same results.
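One common way to quantify that stability is to correlate scores from two administrations of the same test. The sketch below uses hypothetical scores and a hand-rolled Pearson correlation, which is one (but not the only) index of test-retest reliability.

```python
import math

# Hypothetical scores for the same six subjects, tested on two occasions
test1 = [12, 15, 9, 20, 14, 17]
test2 = [13, 14, 10, 19, 15, 16]

def pearson_r(x, y):
    """Pearson correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print("test-retest correlation:", round(pearson_r(test1, test2), 2))
```

A correlation near 1.0 shows the measurement is repeatable; it does not, by itself, show that the test is valid.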
Attitude Measurement
Many of the questions in a marketing research survey are designed to measure attitudes.
Attitudes are a person's general evaluation of something. Customer attitude is an
important factor for the following reasons:
• Attitude helps to explain how ready one is to do something.
• Attitudes do not change much over time.
• Attitudes produce consistency in behavior.
• Attitudes can be related to preferences.
Attitudes can be measured using the following procedures:
• Self-reporting - subjects are asked directly about their attitudes. Self-reporting is
the most common technique used to measure attitude.
• Observation of behavior - assuming that one's behavior is a result of one's
attitudes, attitudes can be inferred by observing behavior. For example, one's
attitude about an issue can be inferred by whether he/she signs a petition related to
it.
• Indirect techniques - use unstructured stimuli such as word association tests.
• Performance of objective tasks - assumes that one's performance depends on
attitude. For example, the subject can be asked to memorize the arguments of both
sides of an issue. He/she is more likely to do a better job on the arguments that
favor his/her stance.
• Physiological reactions - the subject's response to a stimulus is measured using
electronic or mechanical means. While the intensity can be measured, it is
difficult to know if the attitude is positive or negative.
• Multiple measures - a mixture of techniques can be used to validate the findings,
especially worthwhile when self-reporting is used.
There are several types of attitude rating scales:
• Equal-appearing interval scaling - a set of statements is assembled. These
statements are selected according to their position on an interval scale of
favorableness. Statements with a small degree of dispersion are chosen.
Respondents then are asked to indicate with which statements they agree.
• Likert method of summated ratings - a statement is made and the respondents
indicate their degree of agreement or disagreement on a five point scale (Strongly
Disagree, Disagree, Neither Agree Nor Disagree, Agree, Strongly Agree).
• Semantic differential scale - a scale is constructed using phrases describing
attributes of the product to anchor each end. For example, the left end may state,
"Hours are inconvenient" and the right end may state, "Hours are convenient".
The respondent then marks one of the seven blanks between the statements to
indicate his/her opinion about the attribute.
• Stapel Scale - similar to the semantic differential scale except that 1) points on the
scale are identified by numbers, 2) only one statement is used and if the
respondent disagrees a negative number should be marked, and 3) there are 10
positions instead of seven. This scale does not require that bipolar adjectives be
developed and it can be administered by telephone.
• Q-sort technique - the respondent is forced to construct a normal distribution by
placing a specified number of cards in one of 11 stacks according to how
desirable he/she finds the characteristics written on the cards.
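As a sketch of how the Likert method of summated ratings might be scored (the item labels and data below are hypothetical), assuming Strongly Disagree through Strongly Agree map to 1-5 and negatively worded items are reverse-coded:

```python
# Map Likert response labels to numeric scores (1-5).
SCALE = {
    "Strongly Disagree": 1, "Disagree": 2,
    "Neither Agree Nor Disagree": 3, "Agree": 4, "Strongly Agree": 5,
}

def summated_rating(responses, reverse_items=()):
    """Sum a respondent's Likert scores; reverse-code negatively worded items."""
    total = 0
    for i, label in enumerate(responses):
        score = SCALE[label]
        if i in reverse_items:
            score = 6 - score  # reverse-code: 5 becomes 1, 4 becomes 2, etc.
        total += score
    return total

# One respondent's answers to a hypothetical three-item attitude battery;
# the third item is negatively worded, so it is reverse-coded.
answers = ["Agree", "Strongly Agree", "Disagree"]
print(summated_rating(answers, reverse_items={2}))
```

A higher total indicates a more favorable overall attitude across the battery.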
Sampling Plan
The sampling frame is the pool from which the interviewees are chosen. The telephone
book often is used as a sampling frame, but it has shortcomings. Telephone books
exclude households that do not have telephones and households with unlisted
numbers. In addition, because directories quickly become outdated, some listed numbers
are out of service, and people who have recently moved are not listed at all. Such sampling
biases can be overcome by using random digit dialing. Mall intercepts represent another
sampling frame, though there are many people who do not shop at malls and those who
shop more often will be over-represented unless their answers are weighted in inverse
proportion to their frequency of mall shopping.
In designing the research study, one should consider the potential errors. Two sources of
errors are random sampling error and non-sampling error. Sampling errors arise because
the sample size is less than the population being studied, so the results carry a
non-zero confidence interval. Non-sampling errors are those caused
by faulty coding, untruthful responses, respondent fatigue, etc.
There is a tradeoff between sample size and cost. The larger the sample size, the smaller
the sampling error but the higher the cost. After a certain point the smaller sampling error
cannot be justified by the additional cost.
While a larger sample size may reduce sampling error, it actually may increase the total
error. There are two reasons for this effect. First, a larger sample size may reduce the
ability to follow up on non-responses. Second, even if there is a sufficient number of
interviewers for follow-ups, a larger number of interviewers may result in a less uniform
interview process.
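The diminishing return from larger samples can be seen with the standard 95% margin-of-error formula for a sample proportion (a textbook formula, not taken from this text; the worst case p = 0.5 is assumed):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Quadrupling the sample size only halves the sampling error,
# while roughly quadrupling the data-collection cost.
for n in (100, 400, 1600):
    print(n, round(margin_of_error(n), 3))
```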
Data Collection
In addition to the intrinsic sampling error, the actual data collection process will
introduce additional errors. These errors are called non-sampling errors. Some non-
sampling errors may be intentional on the part of the interviewer, who may introduce a
bias by leading the respondent to provide a certain response. The interviewer also may
introduce unintentional errors, for example, due to not having a clear understanding of
the interview process or due to fatigue.
Respondents also may introduce errors. A respondent may introduce intentional errors by
lying or simply by not responding to a question. A respondent may introduce
unintentional errors by not understanding the question, guessing, not paying close
attention, and being fatigued or distracted.
Such non-sampling errors can be reduced through quality control techniques.
Data Preparation: Questionnaire Editing
In our continuing review of data preparation, we will now look further into the topic
of questionnaire editing. Editing a questionnaire can greatly enhance both the number of
survey responses that a researcher may receive in a study as well as the quality of the
responses to individual questions.
It is important to limit the size of a study so that potential respondents do not
lose motivation to participate. But if a survey is limited to only the most necessary
questions, then the researcher must get the most out of each one. One of the best
tools for fine-tuning a
question comes from conducting a pre-test. Pre-tests involve having a limited number
of people answer survey questions and then studying the responses to make sure that
the results are what we might normally expect.
Potential flaws in questionnaires include ambiguous questions, double barreled
questions (asking for two pieces of information in one question), overlapping answers
and offering choices that are not inclusive of all possible answers. These problems should
be handled by the researcher before a questionnaire is ever fielded. But too often,
researchers do not take the time and effort to pre-test surveys.
Once a questionnaire has been carefully crafted and fielded for data collection, problems
can also arise from the respondent side. These potential problems include
illegible, incomplete, ambiguous and inconsistent answers. When this occurs, the
researcher is then faced with the problem of how to remedy such problems. Solutions can
include returning to the field for further data collection, assigning missing values or
discarding some or all of the unsatisfactory answers. There is much debate regarding the
proper handling of unsatisfactory responses so it is well worthwhile for researchers to
invest time up front in order to field the best possible questionnaires.
Data Analysis - Preliminary Steps
Before analysis can be performed, raw data must be transformed into the right format.
First, it must be edited so that errors can be corrected or omitted. The data must then be
coded; this procedure converts the edited raw data into numbers or symbols. A codebook
is created to document how the data was coded. Finally, the data is tabulated to count the
number of samples falling into various categories. Simple tabulations count the
occurrences of each variable independently of the other variables. Cross tabulations, also
known as contingency tables or cross tabs, treat two or more variables simultaneously.
However, since the variables are in a two-dimensional table, cross tabbing more than two
variables is difficult to visualize since more than two dimensions would be required.
Cross tabulation can be performed for nominal and ordinal variables.
Cross tabulation is the most commonly utilized data analysis method in marketing
research. Many studies take the analysis no further than cross tabulation. This technique
divides the sample into sub-groups to show how the dependent variable varies from one
subgroup to another. A third variable can be introduced to uncover a relationship that
initially was not evident.
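A simple cross tabulation can be built from raw survey records with nothing more than the standard library (the records below are hypothetical):

```python
from collections import Counter

# Hypothetical survey records: (gender, preferred package size).
records = [
    ("F", "Large"), ("F", "Small"), ("M", "Large"),
    ("M", "Large"), ("F", "Small"), ("M", "Small"),
]

crosstab = Counter(records)  # counts each (row, column) combination
rows = sorted({r for r, _ in records})
cols = sorted({c for _, c in records})

# Print the contingency table: rows are subgroups, columns are responses.
print("".ljust(4) + "".join(c.ljust(8) for c in cols))
for r in rows:
    print(r.ljust(4) + "".join(str(crosstab[(r, c)]).ljust(8) for c in cols))
```

Each cell shows how the dependent variable (package size) varies from one subgroup to another.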
Conjoint Analysis
Conjoint analysis is a powerful technique for determining consumer preferences for
product attributes.
Hypothesis Testing
A basic fact about testing hypotheses is that a hypothesis may be rejected but that the
hypothesis never can be unconditionally accepted until all possible evidence is evaluated.
In the case of sampled data, the information set cannot be complete. So if a test using
such data does not reject a hypothesis, the conclusion is not necessarily that the
hypothesis should be accepted.
The null hypothesis in an experiment is the hypothesis that the independent variable has
no effect on the dependent variable. The null hypothesis is expressed as H0. This
hypothesis is assumed to be true unless proven otherwise. The alternative to the null
hypothesis is the hypothesis that the independent variable does have an effect on the
dependent variable. This hypothesis is known as the alternative, research, or experimental
hypothesis and is expressed as H1. This alternative hypothesis states that the relationship
observed between the variables cannot be explained by chance alone.
There are two types of errors in evaluating a hypothesis:
• Type I error: occurs when one rejects the null hypothesis and accepts the
alternative, when in fact the null hypothesis is true.
• Type II error: occurs when one accepts the null hypothesis when in fact the null
hypothesis is false.
Because their names are not very descriptive, these types of errors sometimes are
confused. Some people jokingly define a Type III error to occur when one confuses Type
I and Type II. To illustrate the difference, it is useful to consider a trial by jury in which
the null hypothesis is that the defendant is innocent. If the jury convicts a truly innocent
defendant, a Type I error has occurred. If, on the other hand, the jury declares a truly
guilty defendant to be innocent, a Type II error has occurred.
Hypothesis testing involves the following steps:
• Formulate the null and alternative hypotheses.
• Choose the appropriate test.
• Choose a level of significance (alpha) - determine the rejection region.
• Gather the data and calculate the test statistic.
• Determine the probability of the observed value of the test statistic under the null
hypothesis given the sampling distribution that applies to the chosen test.
• Compare the value of the test statistic to the rejection threshold.
• Based on the comparison, reject or do not reject the null hypothesis.
• Make the marketing research conclusion.
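The steps above can be sketched for a test of a single proportion using the normal approximation (the scenario and numbers here are hypothetical):

```python
import math

def z_test_proportion(successes, n, p0, alpha=0.05):
    """Two-sided z-test of H0: the true proportion equals p0."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)   # standard error under the null
    z = (p_hat - p0) / se               # test statistic
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value, p_value < alpha  # True -> reject H0

# H0: 50% of customers prefer the new package; 290 of 500 sampled do.
z, p, reject = z_test_proportion(290, 500, 0.50)
print(round(z, 2), round(p, 4), reject)
```

Note that "reject" here means the observed difference is unlikely under the null hypothesis, not that the alternative has been proven.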
To determine whether research results are statistically significant or merely due to
chance, a test of statistical significance can be run.
Tests of Statistical Significance
The chi-square (χ²) goodness-of-fit test is used to determine whether a set of
proportions has specified numerical values. It often is used to analyze bivariate
cross-tabulated data. Some examples of situations that are well-suited for this test are:
• A manufacturer of packaged products test markets a new product and wants to
know if sales of the new product will be in the same relative proportion of
package sizes as sales of existing products.
• A company's sales revenue comes from Product A (50%), Product B (30%), and
Product C (20%). The firm wants to know whether recent fluctuations in these
proportions are random or whether they represent a real shift in sales.
The chi-square test is performed by defining k categories and observing the number of
cases falling into each category. Knowing the expected number of cases falling in each
category, one can define chi-squared as:
χ² = Σ ( Oi - Ei )² / Ei

where

Oi = the number of observed cases in category i,
Ei = the number of expected cases in category i,
k = the number of categories,
and the summation runs from i = 1 to i = k.
Before calculating the chi-square value, one needs to determine the expected frequency
for each cell. For a goodness-of-fit test, this is done by multiplying the total number of
samples by the proportion specified for each category; when all categories are expected to
be equal, this amounts to dividing the number of samples by the number of cells in
the table.
To use the output of the chi-square function, one uses a chi-square table. To do so, one
needs to know the number of degrees of freedom (df). For chi-square applied to cross-
tabulated data, the number of degrees of freedom is equal to

( number of columns - 1 ) ( number of rows - 1 )

For a one-dimensional goodness-of-fit test, the degrees of freedom equal the number of
categories minus one. The conventional critical level of 0.05 normally is used. If the
calculated output value from the function is greater than the chi-square look-up table
value, the null hypothesis is rejected.
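Applying the formula to the Product A/B/C example above (the observed counts are hypothetical), with 5.991 as the chi-square table value for k - 1 = 2 degrees of freedom at the 0.05 level:

```python
# Expected proportions from historical sales: A 50%, B 30%, C 20%.
observed = {"A": 520, "B": 270, "C": 210}        # hypothetical sample of 1000
proportions = {"A": 0.50, "B": 0.30, "C": 0.20}
n = sum(observed.values())

# chi-square = sum over categories of (observed - expected)^2 / expected
chi_sq = sum(
    (observed[k] - n * proportions[k]) ** 2 / (n * proportions[k])
    for k in observed
)
critical = 5.991  # chi-square table, df = 2, alpha = 0.05
print(round(chi_sq, 2), chi_sq > critical)
```

Since the statistic falls below the critical value here, the null hypothesis is not rejected: the fluctuations are consistent with chance.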
ANOVA
Another test of significance is the Analysis of Variance (ANOVA) test. The primary
purpose of ANOVA is to test for differences between multiple means. Whereas the t-test
can be used to compare two means, ANOVA is needed to compare three or more means.
If multiple t-tests were applied, the probability of a Type I error (rejecting a true null
hypothesis) increases as the number of comparisons increases.
One-way ANOVA examines whether multiple means differ. The test is called an F-test.
ANOVA calculates the ratio of the variation between groups to the variation within
groups (the F ratio). While ANOVA was designed for comparing several means, it also
can be used to compare two means. Two-way ANOVA allows for a second independent
variable and addresses interaction.
To run a one-way ANOVA, use the following steps:
1. Identify the independent and dependent variables.
2. Describe the variation by breaking it into three parts - the total variation, the
portion that is within groups, and the portion that is between groups (or among
groups for more than two groups). The total variation (SStotal) is the sum of the
squares of the differences between each value and the grand mean of all the values
in all the groups. The in-group variation (SSwithin) is the sum of the squares of the
differences in each element's value and the group mean. The variation between
group means (SSbetween) is the total variation minus the in-group variation (SStotal -
SSwithin).
3. Measure the difference between each group's mean and the grand mean.
4. Perform a significance test on the differences.
5. Interpret the results.
This F-test assumes that the group variances are approximately equal and that the
observations are independent. It also assumes normally distributed data; however, since
this is a test on means the Central Limit Theorem holds as long as the sample size is not
too small.
ANOVA is efficient for analyzing data using relatively few observations and can be used
with categorical variables. Note that regression can perform a similar analysis to that of
ANOVA.
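The variance decomposition in step 2 and the F ratio can be verified numerically on a small hypothetical data set:

```python
# Hypothetical ratings collected at three store locations.
groups = [[4, 5, 6], [7, 8, 9], [4, 4, 5]]

all_values = [x for g in groups for x in g]
grand_mean = sum(all_values) / len(all_values)

# Total variation: squared differences from the grand mean.
ss_total = sum((x - grand_mean) ** 2 for x in all_values)
# Within-group variation: squared differences from each group's own mean.
ss_within = sum(
    sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
)
# Between-group variation is the remainder.
ss_between = ss_total - ss_within

k, n = len(groups), len(all_values)
f_ratio = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(ss_total, 2), round(ss_within, 2),
      round(ss_between, 2), round(f_ratio, 2))
```

A large F ratio indicates that the variation between group means is large relative to the variation within groups, suggesting the means differ.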
Discriminant Analysis
While analysis of the difference in means between groups provides information about
individual variables, it is not useful for determining their individual impacts when the
variables are used in combination. Since some variables will not be independent from one
another, one needs a test that can consider them simultaneously in order to take into
account their interrelationship. One such test is to construct a linear combination,
essentially a weighted sum of the variables. To determine which variables discriminate
between two or more naturally occurring groups, discriminant analysis is used.
Discriminant analysis can determine which variables are the best predictors of group
membership. It determines which groups differ with respect to the mean of a variable,
and then uses that variable to predict new cases of group membership. Essentially, the
discriminant function problem is a one-way ANOVA problem in that one can determine
whether multiple groups are significantly different from one another with respect to the
mean of a particular variable.
A discriminant analysis consists of the following steps:
1. Formulate the problem.
2. Determine the discriminant function coefficients that result in the highest ratio of
between-group variation to within-group variation.
3. Test the significance of the discriminant function.
4. Interpret the results.
5. Determine the validity of the analysis.
Discriminant analysis analyzes the dependency relationship, whereas factor analysis and
cluster analysis address the interdependency among variables.
Factor Analysis
Factor analysis is a very popular technique to analyze interdependence. Factor analysis
studies the entire set of interrelationships without defining variables to be dependent or
independent. Factor analysis combines variables to create a smaller set of factors.
Mathematically, a factor is a linear combination of variables. A factor is not directly
observable; it is inferred from the variables. The technique identifies underlying structure
among the variables, reducing the number of variables to a more manageable set. Factor
analysis groups variables according to their correlation.
The factor loading can be defined as the correlations between the factors and their
underlying variables. A factor loading matrix is a key output of the factor analysis. An
example matrix is shown below.
Factor 1 Factor 2 Factor 3
Variable 1
Variable 2
Variable 3
Column's Sum of Squares:
Each cell in the matrix represents correlation between the variable and the factor
associated with that cell. The square of this correlation represents the proportion of the
variation in the variable explained by the factor. The sum of the squares of the factor
loadings in each column is called an eigenvalue. An eigenvalue represents the amount of
variance in the original variables that is associated with that factor. The communality is
the amount of the variable variance explained by common factors.
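The eigenvalue and communality definitions can be checked on a small hypothetical loading matrix (the loadings below are made up for illustration):

```python
# Hypothetical factor loadings: rows = variables, columns = factors.
loadings = [
    [0.9, 0.1],   # Variable 1
    [0.8, 0.2],   # Variable 2
    [0.1, 0.7],   # Variable 3
]

# Eigenvalue of each factor: sum of squared loadings down its column.
eigenvalues = [sum(row[j] ** 2 for row in loadings) for j in range(2)]

# Communality of each variable: sum of squared loadings across its row.
communalities = [sum(x ** 2 for x in row) for row in loadings]

print([round(e, 2) for e in eigenvalues])
print([round(c, 2) for c in communalities])
```

Under the eigenvalue-greater-than-one rule of thumb described below, only the first factor would be retained in this example.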
A rule of thumb for deciding on the number of factors is that each included factor must
explain at least as much variance as does an average variable. In other words, only factors
for which the eigenvalue is greater than one are used. Other criteria for determining the
number of factors include the Scree plot criteria and the percentage of variance criteria.
To facilitate interpretation, the axis can be rotated. Rotation of the axis is equivalent to
forming linear combinations of the factors. A commonly used rotation strategy is the
varimax rotation. Varimax attempts to force the column entries to be either close to zero
or one.
Cluster Analysis
Market segmentation usually is based not on one factor but on multiple factors. Initially,
each variable represents its own cluster. The challenge is to find a way to combine
variables so that relatively homogenous clusters can be formed. Such clusters should be
internally homogenous and externally heterogeneous. Cluster analysis is one way to
accomplish this goal. Rather than being a statistical test, it is more of a collection of
algorithms for grouping objects, or in the case of marketing research, grouping people.
Cluster analysis is useful in the exploratory phase of research when there are no a priori
hypotheses.
Cluster analysis steps:
1. Formulate the problem, collecting data and choosing the variables to analyze.
2. Choose a distance measure. The most common is the Euclidean distance. Other
possibilities include the squared Euclidean distance, city-block (Manhattan)
distance, Chebychev distance, power distance, and percent disagreement.
3. Choose a clustering procedure (linkage, nodal, or factor procedures).
4. Determine the number of clusters. They should be well separated and ideally they
should be distinct enough to give them descriptive names such as professionals,
buffs, etc.
5. Profile the clusters.
6. Assess the validity of the clustering.
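A few of the distance measures listed in step 2 can be sketched directly (the points are hypothetical):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """City-block distance: sum of absolute coordinate differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def chebychev(a, b):
    """Largest coordinate difference in any single dimension."""
    return max(abs(x - y) for x, y in zip(a, b))

p, q = (1, 2), (4, 6)
print(euclidean(p, q), manhattan(p, q), chebychev(p, q))
```

The choice of measure affects which respondents end up grouped together, so it should match how "similarity" is meant in the segmentation problem.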
Marketing Research Report
The format of the marketing research report varies with the needs of the organization.
The report often contains the following sections:
• Authorization letter for the research
• Table of Contents
• List of illustrations
• Executive summary
• Research objectives
• Methodology
• Results
• Limitations
• Conclusions and recommendations
• Appendices containing copies of the questionnaires, etc.