This document presents a poster on meta-analysis and the ipdforest command in Stata. It discusses meta-analysis approaches including one-stage and two-stage methods. It outlines the use of mixed-effects regression models to combine individual patient data from multiple randomized controlled trials in a one-stage approach. It then presents an example hypothetical study and model to analyze individual patient data using the xtmixed command in Stata, generating results that can be displayed in a forest plot using ipdforest. The poster serves as a practical guide on meta-analysis with individual patient data.
A Multi-Criteria Evaluation of Environmental Databases Using the Hasse Diagram Technique (HDT). The HDT is a multi-criteria evaluation method that can be used to rank objects and is therefore also applicable to decision making.
The HDT reveals the best and the worst databases, as well as conflicts among them arising from differences in information content.
Presented at a workshop on preparing the thesis proposal for graduate studies in the Public Administration and Local Development programs. The workshop was held in the Department of Public Administration, College of Business, The University of Jordan, Amman. This presentation provides guidelines on preparing the thesis proposal.
Urban Neighbourhood Analysis (UNA) Using Mixed Method Research Design (Prof Ashis Sarkar)
This presentation emphasizes the identification and analysis of the 'urban neighbourhood'. Of the several research methods available, the 'mixed method' design is discussed with examples.
This document provides an introduction to statistics, including defining key terms and concepts. It discusses what statistics is, the difference between populations and samples, parameters and statistics. It also outlines the two main branches of statistics - descriptive statistics, which involves organizing and summarizing data, and inferential statistics, which uses samples to draw conclusions about populations. The document then discusses different types of data, such as qualitative vs. quantitative, and the four levels of measurement for quantitative data. Finally, it discusses methods for designing statistical studies and collecting data, such as interviews, questionnaires, observation, and using registration data or mechanical devices.
This document discusses various patterns of organization that can be found in written texts, including problem-solution, general-specific, and claim-counterclaim patterns. It provides examples of each pattern and explains how they sequence information. The problem-solution pattern typically describes a problem, process or cause, and proposed solution. The general-specific pattern moves from broad generalizations to increasingly specific details. The claim-counterclaim pattern presents a claim, evidence to support it, a counterclaim, and evidence to rebut the counterclaim. Patterns are important for text structure but can change and be combined in various ways within a single document.
This document provides an overview and template for conducting independent research. It discusses key aspects of the research process such as defining the research problem, identifying independent and dependent variables, developing hypotheses, choosing an appropriate research methodology, collecting and analyzing data, and presenting conclusions. Sample topics are provided to illustrate each step, such as examining factors that could contribute to a university's internet server crashing each July. The document concludes by listing the references consulted in creating the research overview and template.
Relevance Feature Discovery for Text Mining (redpel dot com)
The document discusses relevance feature discovery for text mining. It presents an innovative model that discovers both positive and negative patterns in text documents as higher-level features and uses them to classify terms into categories and update term weights based on their specificity and distribution in patterns. Experiments on standard datasets show the proposed model outperforms both term-based and pattern-based methods.
Advantages of Query Biased Summaries in Information Retrieval (Onur Yılmaz)
Presentation of the paper:
Anastasios Tombros and Mark Sanderson. "Advantages of query biased summaries in information retrieval." In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '98), ACM, New York, NY, USA, 1998, 2-10. DOI: 10.1145/290941.290947.
This document discusses mixed methods research, which combines qualitative and quantitative research approaches. Mixed methods research is defined as research that combines elements of qualitative and quantitative data collection, analysis, and findings. A popular framework identifies five main purposes of mixed methods research: triangulation, complementarity, development, initiation, and expansion.
Quantitative Data Analysis - Attitudes Towards Research (Lee Cox)
This presentation analyzes survey data on attitudes towards research. The data set has limitations, however, such as missing key demographic details and being an opportunistic sample of 50 respondents. While the data can still be analyzed, issues are identified such as ambiguous questions and a lack of raw data. Better approaches to the analysis include categorizing respondents' roles, formatting questions clearly, and applying statistical tests to check for significance. Presenting the findings would require directly answering the research question, drawing conclusions, and validating interpretations with supporting data while considering alternatives.
A presentation about the added value of combining qualitative and quantitative methods. It begins with a brief discussion of qualitative research and how it is distinct from yet shares basic principles with quantitative research, followed by a discussion of four important ways mixed methods -- integrating qualitative and quantitative -- adds value to our research efforts, and then a discussion of mixed methods research -- what it is, typologies, alternatives to typologies, and the use of diagrams.
This document provides an overview of mixed methods research. It discusses the author's background and positioning in mixed methods. It then defines mixed methods research as collecting, analyzing, and integrating both quantitative and qualitative data within a single study. The document outlines key steps in designing a mixed methods study, including determining the research problem, collecting both types of data, analyzing the separate quantitative and qualitative results, and mixing or linking the two forms of data.
Elementary Statistics: Picturing the World, ch01.1 (Debra Wallace)
This document provides an overview and introduction to statistics. It defines key statistical concepts such as data, population, sample, parameter, and statistic. It also distinguishes between descriptive and inferential statistics. Specifically, it states that statistics is the science of collecting, organizing, analyzing, and interpreting data to make decisions. A population is the entire set being studied, while a sample is a subset of the population. A parameter describes a characteristic of the entire population, while a statistic describes a characteristic of a sample. Descriptive statistics involves summarizing and displaying data, while inferential statistics involves using samples to draw conclusions about populations.
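The parameter/statistic distinction summarized above can be illustrated with a small sketch. This example is not from the summarized slides; the data and variable names are invented for illustration:

```python
import random

random.seed(7)

# The population is the entire set being studied.
population = [random.gauss(100, 15) for _ in range(10_000)]
mu = sum(population) / len(population)   # parameter: describes the population

# A sample is a subset of the population.
sample = random.sample(population, 100)
xbar = sum(sample) / len(sample)         # statistic: describes the sample

# Inferential statistics uses the statistic (xbar) to estimate
# the parameter (mu), which is usually unknown in practice.
print(round(mu, 1), round(xbar, 1))
```

Computing `mu` here is only possible because the toy population is fully observed; in a real study, only `xbar` would be available.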
A mixed research design uses both quantitative and qualitative data and methods to conduct research. It has two main types - mixed method, which uses quantitative data for one stage and qualitative for another, and mixed model, which uses both quantitative and qualitative data in one or two stages. The advantages include providing context behind numbers, allowing triangulation, and compensating for weaknesses. However, mixed research also requires more resources, expertise, time, and can be more difficult to interpret.
The document describes a neural probabilistic model for context-based citation recommendation. It learns distributed representations of words and documents using neural networks. These representations are used to calculate the probability of citing a document given a citation context. The model is trained on over 10 million citation context-document pairs from CiteSeer. It significantly outperforms baseline methods in recommending citations based on citation context, achieving the highest MAP, recall, MRR and nDCG scores. Future work could explore combining this local context-based model with global recommendation models.
This document discusses mixed methods research. It defines mixed methods research as integrating both quantitative and qualitative data collection and analysis within a single study. The document outlines the basic characteristics, types of designs, steps, and advantages and disadvantages of mixed methods research. It discusses when mixed methods is appropriate and reasons for using it, such as to explain findings or address questions at different levels. The four main mixed methods designs are explanatory, exploratory, embedded, and triangulation designs.
Mixed methods research involves collecting and analyzing both quantitative and qualitative data within a single study. It originated in the social sciences and has expanded to fields like nursing. There are multiple mixed methods research designs including sequential explanatory, sequential exploratory, concurrent triangulation, and concurrent transformative. These designs differ in whether the quantitative and qualitative components are conducted sequentially or concurrently, and whether one method has priority over the other. The purpose is to develop a more comprehensive understanding than a single method could provide alone.
Definition
A procedure used to collect both qualitative and quantitative data.
This is done because it is believed that both types of studies together provide a clearer understanding of what is being studied.
"It consists of merging, integrating, linking, or embedding the two 'strands'" (Creswell, 2012).
Crisis Situation, Communication Strategy and Media Coverage (Afghanistan)
This PPT addresses the status of the crisis situation, the role of communication strategy during crisis and crisis management, and the related media coverage.
A mixed methods approach involves collecting, analyzing, and integrating both quantitative and qualitative data within a single study or series of studies. While some argue it results in invalid studies, others believe quantitative and qualitative approaches can be compatible if used to complement each other's strengths. Mixed methods research can provide stronger evidence through triangulation, answer a broader range of questions, and increase generalizability, but it is also more complex, resource-intensive, and time-consuming than single method designs. There are different ways to sequence the quantitative and qualitative elements, such as explanatory or exploratory designs.
Classification of Researcher's Collaboration Patterns Towards Research Perfor... (Nur Hazimah Khalid)
A viva presentation slide deck for the Master of Computer Science degree, presented on 24 May 2016 at the Faculty of Computing, Universiti Teknologi Malaysia, by Nur Hazimah Khalid.
Sarah Hammock is a chemical engineering student at Texas A&M University pursuing a Bachelor of Science degree. She has a 3.898 GPA and is currently taking courses in thermodynamics, numerical analysis, physical chemistry, and fluid operations. Her past research experience includes investigating the resistive switching properties of HfO2 thin films and new methods to recover carboxylic acids from an aqueous fermentation broth. She has strong skills in research, data analysis using Excel and MATLAB, experimental planning, and communication. She has received over $58,000 in scholarships and has been involved in various extracurricular and service activities.
The Professional Seminar Organizing Company in Ha Noi City (Hoàng Tuấn)
The document advertises a company that organizes professional customer conferences and meetings in major cities across Vietnam such as Ho Chi Minh City, Đồng Nai, Phan Thiết, Nha Trang, Cần Thơ, and Đà Nẵng. It provides the company website and phone numbers for contact.
The Most Professional Bank Grand Opening Organizing Company in Bình Dương (Hoàng Tuấn)
The document advertises a company that professionally organizes grand opening events in Ho Chi Minh City, Đồng Nai, Phan Thiết, Nha Trang, Cần Thơ, and Đà Nẵng. It provides their website and phone numbers for contact. The same information is repeated throughout the document.
Organizing the Most Professional Company Anniversary Celebrations in Ho Chi Minh City (Hoàng Tuấn)
The document advertises an event planning company in Ho Chi Minh City, Vietnam that specializes in organizing anniversary celebration events for companies. It provides the company's website and contact phone numbers, stating that they are the most professional and reputable event planning company in Ho Chi Minh City. The same message is repeated throughout the document.
The disparity between the public transit network and the network of daily destinations is measured by overlaying metric and graphic information. By comparing the metric and graphic space of vulnerabilities, the disjuncture between the transit network's proximity to destinations and the physical organization of the road network on site can be identified as a cause of failure. By accounting for the higher risks involved in transit, this analysis constructs a hypothetical scenario to quantify which people and geographic regions are at risk when seeking medical care. People traveling beyond the four-mile radius cross more at-risk road intersections because of the increased number of others sharing the same destination point.
The Professional Event Company in Ho Chi Minh City & Ha Noi, Vietnam (Hoàng Tuấn)
The document advertises a company that organizes professional customer conferences in major cities in Vietnam such as Ho Chi Minh City, Đồng Nai, Phan Thiết, Nha Trang, Cần Thơ, and Đà Nẵng. It provides the company website and phone numbers for contact.
A Professional Company for Product Launch and New Product Introduction Events in Ho Chi Minh City (Hoàng Tuấn)
The document advertises an event planning company that organizes professional product launch events and introductions in major cities across Vietnam such as Ho Chi Minh City, Đồng Nai, Phan Thiết, Nha Trang, Cần Thơ and Đà Nẵng. It provides the company website and phone numbers for contact.
This is a general introductory presentation of the Lojistik Havuzu company. Lojistik Havuzu, which supports logistics companies in winning new customers and managing their existing customers, describes its activities in this presentation.
The document describes the personality of Paola Alvarenga. She is an 18-year-old Salvadoran woman who describes herself as strong, cheerful, bipolar, determined, perfectionist, and disorganized. She also describes herself as Catholic, small, and squirrel-like. In terms of her culture, she enjoys typical Salvadoran foods and is sociable and family-oriented. Internally, she sees herself as pretty, mysterious, and creative, though initially reserved with other people.
This poster presents an overview of individual patient data (IPD) meta-analysis and introduces the ipdforest command in Stata. IPD meta-analysis involves pooling raw data from multiple studies and can address issues like inconsistent reporting across studies. The poster outlines three mixed-effects regression models for IPD meta-analysis with varying assumptions about fixed and random effects. It also demonstrates how ipdforest can be used to generate forest plots for one-stage IPD meta-analyses in Stata, overcoming limitations of standard two-stage approaches.
This poster summarizes a document presenting a meta-analysis of individual patient data (IPD) from multiple randomized controlled trials. It discusses three statistical models for conducting a one-stage IPD meta-analysis using mixed effects regression models. The first model includes a fixed common intercept and random treatment effects. The second allows for fixed trial-specific intercepts and baseline effects. The third considers random trial intercepts and treatment effects. The document outlines how to implement each model in Stata software.
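The three one-stage models described above can be sketched on hypothetical data. The poster implements them in Stata with xtmixed; the following Python/statsmodels version, with invented trial data and variable names, is only an illustrative approximation of the same model structures:

```python
# Illustrative one-stage IPD meta-analysis on simulated data
# (hypothetical trials; not the poster's Stata code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for trial in range(5):                       # five hypothetical RCTs
    n = 60
    treat = rng.integers(0, 2, n)            # randomized treatment indicator
    a = 1.0 + rng.normal(0, 0.3)             # trial-specific intercept
    b = 0.5 + rng.normal(0, 0.2)             # trial-specific treatment effect
    y = a + b * treat + rng.normal(0, 1, n)
    rows.append(pd.DataFrame({"trial": trial, "treat": treat, "y": y}))
ipd = pd.concat(rows, ignore_index=True)

# Model 1: fixed common intercept, random treatment effect by trial.
m1 = smf.mixedlm("y ~ treat", ipd, groups=ipd["trial"],
                 re_formula="~0 + treat").fit()

# Model 2: fixed trial-specific intercepts, random treatment effect.
m2 = smf.mixedlm("y ~ treat + C(trial)", ipd, groups=ipd["trial"],
                 re_formula="~0 + treat").fit()

# Model 3: random trial intercepts and random treatment effects.
m3 = smf.mixedlm("y ~ treat", ipd, groups=ipd["trial"],
                 re_formula="~treat").fit()

print(m1.fe_params["treat"], m3.fe_params["treat"])
```

The pooled treatment effect is the fixed-effect coefficient on `treat` in each fit; the trial-level random effects are what a forest plot (ipdforest in the Stata workflow) would display per study.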
CHAPTER 10 MIXED METHODS PROCEDURES (EstelaJeffery653)
CHAPTER 10 MIXED METHODS PROCEDURES
How would you write a mixed methods procedure section for your proposal or study? Up until this point, we have considered collecting quantitative data and qualitative data. We have not discussed “mixing” or combining the two forms of data in a study. We can start with the assumption that both forms of data provide different types of information (open-ended data in the case of qualitative and closed-ended data in the case of quantitative). If we further assume that each type of data collection has both limitations and strengths, we can consider how the strengths can be combined to develop a stronger understanding of the research problem or questions (and, as well, overcome the limitations of each). In a sense, more insight into a problem is to be gained from mixing or integration of the quantitative and qualitative data. This “mixing” or integrating of data, it can be argued, provides a stronger understanding of the problem or question than either by itself. Mixed methods research, therefore, is simply “mining” the databases more by integrating them. This idea is at the core of a new methodology called “mixed methods research.”
A good mixed methods procedure needs to begin by conveying the nature of mixed methods research and its essential characteristics. Start with the assumption that mixed methods is a methodology in research and that the readers need to be educated as to the basic intent and definition of the design, the reasons for choosing the procedure, and the value it will lend to a study. Then, decide on a mixed methods design to use. There are several from which to choose; consider the different possibilities and decide which one is best for your proposed study. With this choice in hand, discuss the data collection, the data analysis, and the data interpretation, discussion, and validation procedures within the context of the design. Finally, end with a discussion of potential ethical issues that need to be anticipated in the study, and suggest an outline for writing the final study. These are all standard methods procedures, and they are framed in this chapter as they apply to mixed methods research. Table 10.1 shows a checklist of the mixed methods procedures addressed in this chapter.
COMPONENTS OF MIXED METHODS PROCEDURES
Mixed methods research has evolved into a set of procedures that proposal developers and study designers can use in planning a mixed methods study. In 2003, the Handbook of Mixed Methods in the Social and Behavior Sciences (Tashakkori & Teddlie, 2003) was published (and later added to in a second edition, see Tashakkori & Teddlie, 2010), providing a comprehensive overview of this approach. Now several journals emphasize mixed methods research, such as the Journal of Mixed Methods Research, Quality and Quantity, Field Methods, and the International Journal of Multiple Research Approaches. Additional journals actively encourage this form of inquiry (e.g., International Journal of ...
The document outlines various aspects of empirical research studies including sampling methods, measures, design, analysis, and conclusions. It discusses sampling procedures, measures of stressors and outcomes, research design types, variables that are controlled for, and issues with determining causality from evidence. Minimum quality criteria for research designs are also presented focusing on randomized controlled trials, experiments, and longitudinal and cross-sectional studies.
This document provides an overview of mixed methods research. It defines mixed methods research as combining quantitative and qualitative research techniques in a single study. The document discusses the purposes of mixed methods research, compares qualitative and quantitative research, and examines the philosophical basis of pragmatism in mixed methods. It also outlines various mixed methods research designs, procedures for planning a mixed methods study, and strengths and weaknesses of the approach.
This document discusses mixed methods research design. It begins by defining mixed methods research as involving collecting and integrating both quantitative and qualitative data within a single research project to provide a more comprehensive understanding of the phenomenon being studied. It then outlines the typical structure of a mixed methods research proposal, including an introduction with basic information, a section on the research topic, and a research plan section. The research plan section often includes a literature review and details on the specific mixed methods design and data collection methods. The document provides examples of five primary mixed methods designs: sequential explanatory, sequential exploratory, convergent parallel, embedded, and transformative.
This methodological guidance article discusses the elements of a high-quality meta-analysis that is conducted within the context of a systematic review.
Meta-analysis, a set of statistical techniques for synthesizing the results of multiple studies, is used when the guiding research question focuses on a quantitative summary of study results. In this guidance article, we discuss the systematic review methods that support high-quality meta-analyses and outline best-practice meta-analysis methods for describing the distribution of effect sizes in a set of eligible studies. We also provide suggestions for transparently reporting the methods and results of meta-analyses to influence practice and policy. Given the increasing use of meta-analysis for important policy decisions, the methods and results of meta-analysis should be both transparent and reproducible.
Keywords: meta-analysis, systematic review
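The quantitative summary described above is, at its simplest, an inverse-variance weighted average of study effect sizes. A minimal sketch in Python, using invented effect sizes and standard errors purely for illustration (not data from the article):

```python
# Sketch of a fixed-effect inverse-variance meta-analysis summary.
# The effect sizes and standard errors below are hypothetical.
import numpy as np

yi = np.array([0.30, 0.12, 0.45, 0.25, 0.18])   # study effect sizes
sei = np.array([0.10, 0.15, 0.20, 0.12, 0.08])  # standard errors

wi = 1.0 / sei**2                     # inverse-variance weights
mu = np.sum(wi * yi) / np.sum(wi)     # pooled estimate
se_mu = np.sqrt(1.0 / np.sum(wi))     # standard error of the pooled estimate
ci = (mu - 1.96 * se_mu, mu + 1.96 * se_mu)

print(f"pooled effect = {mu:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Precise studies (small standard errors) dominate the weighted average, which is exactly what a forest plot visualizes study by study.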
A Qualitative Study on Reframing the Problem-solving Paradigm of Management Science.
Neither qualitative nor quantitative methods, as they are currently constituted, adequately resolve the problems of representation and legitimation in the management sciences. This project seeks to resolve contradictions in the ontological and epistemological foundations of social science in order to overcome shortcomings in the two major paradigms used in research, where different views of the same phenomena emerge and multiple realities appear to exist.
I. Purpose
The purpose of this experiential learning activity is to apply nursing leadership knowledge and skills to plan for organizational change with system-wide impact. (CO 2, 3, 5)
III. Requirements
Description of the Assignment
This assignment provides the opportunity for the student to:
Create an evidence-based plan for system-wide change guided by a selected organizational change model
Engage in high-level decision-making processes common in the nurse executive role
Use reflective practice knowledge and skills in making high level decision making and change management
IV. Preparing the Assignment
Address all components of the Advanced Communication in Systems Leadership paper as outlined under "Assignment Directions and Criteria".
The paper is graded on quality and completeness of information, depth of thought, organization following outline provided, substantive narrative, use of citations, use of Standard English, and writing conventions.
Format:
American Psychological Association. (2010). Publication manual of the American Psychological Association (current ed.). Washington, DC: Author. This is the source used for this paper.
Required elements
Title page, reference page
Use Microsoft Word
Page numbers, running head, double-spaced, Times New Roman, 12pt font, 1" margins, level 1 headings
Paper length: 7 pages maximum, excluding reference page and title page
Scholarly sources
Minimum of four (4) scholarly resources no older than 5 years (see "What is a Scholarly Source" under APA resources)
Proof-reading
Use spell check and grammar check and correct all errors
Compare final draft to detailed outline directions to ensure all required elements included
Submitting the paper
DIRECTIONS AND ASSIGNMENT CRITERIA
You will use the following headings for your paper:
Approach to the organizational mandate
Purpose of the paper
Overview of the tasks, potential challenges, and implications of a reduction in workforce
Part II: Reduction in Workforce-Deciding
Using Human Resources (HR) metrics Table 1
Approach, choices, rationale
Challenges presented (including role of ethics)
Using HR metrics with Relative Information Table 2
Approach, choices, rationale
Challenges presented
Conflicts raised
Negotiation used
Part III: Reduction in Workforce-Planning the Change
Overview of reorganization plan including timeline
Plan for change and application of Kotter's or Rogers' change model
Anticipated conflict (three areas) and the benefits of using a change model
Healthy work environment
Describe department and system-wide implications, impact, and conflict
Strategies for addressing morale and motivation of remaining workforce
Summary/Conclusions
Restatement of purpose
Overview of tasks
What was learned
The World Testifies Of Data And Our Understanding Of It Essay (Sandy Harwell)
The document discusses qualitative research methods. It defines qualitative research as exploring and describing phenomena through subjective and inductive strategies. Some key points made include:
- Qualitative research aims to answer questions about why and how things occur.
- There are three main purposes: exploratory, explanatory, and descriptive. Exploratory research discovers patterns in phenomena, while explanatory research identifies relationships shaping phenomena and descriptive research documents phenomena of interest.
- Qualitative research relies on non-experimental and phenomenological approaches to collect data through open-ended questions and observations.
"The Duty of Loyalty and Whistleblowing" Please respond to the following:
· Analyze the duty of loyalty in whistleblower cases to determine to whom loyalty is owed and who shows the greater duty of loyalty. Support your analysis with specific examples. Then, suggest at least one (1) change to an existing law.
· Reexamine the Citizens United decision in Chapter 1, and determine which of the following groups has the greatest free speech rights: corporations, public employees, or private employees. Provide a rationale for your determination.
11 Combining Research Methods: Case Studies and Action Research
Rebecca Jester
Introduction
In Chapters 7 and 8, we focused on the unique features of quantitative and qualitative research. In this chapter, we aim to demonstrate how research methods can be integrated and combined to address specific research questions. The chapter will provide an overview of two specific research designs: action research and case studies, together with examples from research projects conducted by the author. This chapter does not aim to provide an in-depth philosophical debate related to case study and action research approaches, but rather a practical discussion of the merits, limitations and application of these two approaches. We begin by discussing the concepts of ‘mixed methods’ and ‘triangulation', first introduced in Chapter 2.
Mixed methods approaches
Traditionally, within health and social research, individuals have aligned themselves with either the quantitative or qualitative paradigm. However, in reality, many real world research projects benefit from mixing or combining methods. Mixed methods research can be accomplished either by using specific approaches to research, such as action research or case study, as discussed within this chapter, or by adopting a phased approach within a study. This might involve the first stage being exploratory within the qualitative paradigm, and the results from this being used to form specific hypotheses for testing within an experimental design, such as a randomized controlled trial. Equally, a quantitative approach (say, a questionnaire) might be used to gather data from a wide range of people, with the results being used to develop a qualitative interview schedule for use with a small sample of respondents.
Triangulation
Very often a research study is undertaken with multiple datasets, mixed methodology or with different researchers, such as at different sites. Triangulation is a very useful technique that enables you to enhance and verify concepts. As Ramprogus (2005, p. 4) suggests, ‘triangulation … tries to reconcile the differences of two or more data sources, methodological approaches, designs, theoretical perspectives, investigators and data analysis to compensate for the weaknesses of any single strategy towards achieving completeness or confirmation of findings’. However, triangulation must be exercised with caution; it is no substitute for robust and well-established ...
The document provides guidance on writing the discussion section of a scientific article. It notes that the discussion is the most difficult section and aims to help readers understand the study by contextualizing results, exhibiting critical thinking, and comparing findings to previous literature. The discussion should include a summary of findings, interpretation of results, comparison to other studies, implications, limitations, and recommendations. Examples are provided for each component to illustrate how to effectively write the discussion section.
This document outlines the purpose and process of conducting a pilot study. It defines a pilot study as an experimental investigation used to test feasibility and methods for a larger study. The main goals of a pilot study are to assess feasibility and avoid issues in a full-scale study. Key aspects addressed in a pilot study include process, resources, management, and scientific questions. Data analysis focuses on descriptive statistics and confidence intervals rather than significance testing. Results are used to determine if the full study is feasible and whether any modifications are needed.
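The emphasis on descriptive statistics and confidence intervals rather than significance testing can be illustrated with a simple recruitment-rate calculation; the counts below are invented for the sketch:

```python
# Sketch: pilot-study analysis focused on description and confidence
# intervals rather than hypothesis tests. Recruitment counts are invented.
import math

approached, consented = 48, 31          # hypothetical pilot recruitment counts
p = consented / approached              # consent rate
se = math.sqrt(p * (1 - p) / approached)
lo, hi = p - 1.96 * se, p + 1.96 * se   # Wald 95% CI for the rate

print(f"consent rate {p:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The width of the interval, not a p-value, is what informs whether the full-scale study's recruitment assumptions are feasible.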
This document provides an overview of key components to include when designing quantitative research studies. It discusses the importance of stating study variables, identifying the research design and how it connects to research questions. It also addresses time and resource constraints, methodology considerations like sampling strategies and data collection procedures. The document outlines important elements for the data analysis plan and addressing threats to validity. Ethical procedures for access, data treatment, and IRB approval are also reviewed.
This document summarizes a simulation study comparing the performance of different meta-analysis methods when assumptions of normality are violated. The study generated simulated datasets with various distributions for true effects and degrees of heterogeneity. It then compared methods like fixed effects, DerSimonian-Laird, maximum likelihood, and permutations in terms of coverage, power, and confidence interval estimation. The results showed that some methods are more robust to non-normal data, with profile likelihood and permutations generally performing best, while other methods like fixed effects and DerSimonian-Laird showed poorer performance.
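For readers unfamiliar with the methods being compared, the DerSimonian-Laird approach estimates the between-study variance (tau-squared) from Cochran's Q and then re-weights the studies. A rough sketch with invented data (the simulation study itself used other data and software):

```python
# Sketch: DerSimonian-Laird estimate of between-study variance (tau^2)
# and the resulting random-effects pooled mean. Data are hypothetical.
import numpy as np

yi = np.array([0.10, 0.55, -0.20, 0.40, 0.05, 0.30])  # study effects
vi = np.array([0.04, 0.06, 0.05, 0.03, 0.08, 0.05])   # within-study variances

wi = 1.0 / vi
mu_fe = np.sum(wi * yi) / np.sum(wi)            # fixed-effect mean
Q = np.sum(wi * (yi - mu_fe) ** 2)              # Cochran's Q statistic
df = len(yi) - 1
C = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
tau2 = max(0.0, (Q - df) / C)                   # DL estimator, truncated at 0

w_re = 1.0 / (vi + tau2)                        # random-effects weights
mu_re = np.sum(w_re * yi) / np.sum(w_re)        # random-effects pooled mean
```

Because tau-squared is truncated at zero and estimated from a single Q statistic, it can be unstable in small meta-analyses, which is the behaviour such simulation studies probe.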
Tami Frazier
RE: Discussion - Week 3
NURS 6052 – Essentials of Evidence-Based Practice
Week 3 Initial Discussion Post
The Role of Theoretical Frameworks in Research
Research is a process of evaluating a concept or theory concerning a specific subject. Analysis of a theory includes examining the behaviors and characteristics of people and how they interact with biological, interpersonal, and environmental factors (Polit & Beck, 2017). Every theory attempts to explain phenomena and how they are related to a specific purpose. Valid research uses a theory or model as the building blocks. Nursing theory relies on models to define what nursing is and the processes involved in providing care (Polit & Beck, 2017). In this post, I will examine a research example that has adopted different theories and models to design, implement, and evaluate health promotion efforts (Joseph, Daniel, Thind, Benitez, & Pekmezi, 2016).
Research Review
Finding research related to nursing theories and models was an easy task. Many fundamental nursing policies and procedures are founded on either a theory or a model. For this paper review, I chose the transtheoretical model which states that “transition from one stage of change to the next are affected by processes of change” (Polit & Beck, 2017, p. 124). The research paper was focused on reviewing numerous theories used to assess long-term maintenance of physical activity, weight loss, and smoking cessation (Joseph et al., 2016). Within this research, the authors referenced five prominent behavioral theories which are self-determination theory, the theory of planned behavior, social cognitive theory, transtheoretical model, and the social ecological model (Joseph et al., 2016). The paper excluded studies that referenced cognitive behavioral therapy used for intervention. PubMed and PsycINFO were used with relevant search terms and Boolean operators. Each article was then reviewed by three different reviewers.
Transtheoretical Model
In this article, the transtheoretical model (TTM) was used to define and recognize behavioral change through natural processes. The total number of participants was 20,645, with over 65% of participants being female and a mean age of 49.9 years (Joseph et al., 2016). TTM is a combination of behavior change theories and psychotherapy (Joseph et al., 2016). TTM presumes people move through the five stages of behavioral change, which are precontemplation, contemplation, preparation, action, and maintenance, in a cyclical manner instead of a linear route (Joseph et al., 2016). Often participants in the study found themselves making progress with physical activity, weight loss, and smoking cessation only to regress, creating a cycle of one step ahead and two steps back (Joseph et al., 2016). Relapse is a common occurrence with TTM for new patients and long-term patients. Maintaining the stage of change can be challenging due to intrinsic and extrinsic factors.
Combining Qualitative and Quantitative Approaches:
Some Arguments for Mixed Methods Research
Thorleif Lund
University of Oslo
One purpose of the present paper is to elaborate 4 general advantages of the mixed methods approach. Another purpose is to propose a 5-phase evaluation design, and to demonstrate its usefulness for mixed methods research. The account is limited to research on groups in need of treatment, i.e., vulnerable groups, and the advantages of mixed methods are illustrated by the help of the 5-phase evaluation design. The basic idea is that the total set of relevant attributes and changes for such a vulnerable group should be taken into consideration in all phases, and that the mixed methods approach will provide an optimal treatment, will give a more complete description and understanding of the treatment effects, and will facilitate generalization to professional work.
Keywords: mixed methods, qualitative-quantitative combination, evaluation design
The research methodology in the social and behavioral sciences has undergone radical changes over the past 50 years. One may speak of three methodological movements: (1) the quantitative movement, (2) the qualitative movement, and (3) the mixed methods movement (Polit & Beck, 2004; Teddlie & Tashakkori, 2003). Research in the twentieth century, especially in the first half of the century, was dominated by the quantitative movement. Its philosophical basis of positivism can be said to have been substituted by critical realism in the last half of the century (Cook & Campbell, 1979). The qualitative approach developed partly as a protest against the dominance of the quantitative tradition, and it attained its definitive breakthrough around 1970. Several philosophical assumptions have been proposed for the qualitative approach, mainly some variants of constructivism (Lincoln & Guba, 2000). The differences between the two approaches with respect to philosophical basis, scientific fruitfulness, and empirical methods have been extensively debated. The disagreement has been great, in particular with respect to philosophical positions, as illustrated by the "paradigm wars" (Gage, 1989), and the two approaches are still regarded by many researchers as incompatible means for knowledge construction (Teddlie & Tashakkori, 2003). The mixed methods movement represents a blending of quantitative and qualitative methods in research, and it can be said to have evolved historically from the notion of "triangulating" information from different data sources (Campbell & Fiske, 1959; Denzin, 1978; Morse, 1991; Patton, 1990). The mixed methods approach can be considered established as a formal discipline around 2000. This third movement is characterized by a practical/pragmatic attitude in that the research questions in empirical studies are given ...
ISSN 0031-3831 print/ISSN 1470-1170 online. © 2012 Scandinavian Journal of Educational Research. http://dx.doi.org/10.1080/00313831.2011.568674
This document discusses the advantages of combining qualitative and quantitative research methods, known as mixed methods research. It proposes a 5-phase evaluation design to demonstrate how mixed methods can be useful. The 5 phases are: 1) need analysis, 2) construction and choice, 3) implementation and process analysis, 4) effect assessment and interpretation, and 5) generalization. The document argues that mixed methods research can answer more complex questions, provide a more complete picture by combining different perspectives, and produce more valid inferences through convergence of results. It illustrates how mixed methods can be applied effectively within each phase of the proposed design, using social anxiety treatment as an example, to better understand client needs, design effective interventions, analyze implementation and causal processes, assess
Exploratory Factor Analysis: Concepts and Theory (Hamed Taherdoost)
This document discusses exploratory factor analysis (EFA), including its concepts, theory, and process. EFA is commonly used to reduce a large number of variables into a smaller set of underlying factors and establish relationships between measured variables and latent constructs. The key steps of EFA include assessing suitability of the data, extracting factors, determining the number of factors to retain, rotating the factors for better interpretation, and labeling the factors. Sample size, factor extraction and rotation methods, and interpretation are also covered.
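One of the steps listed above, determining the number of factors to retain, is often done with the Kaiser criterion (retain factors whose eigenvalues of the correlation matrix exceed 1). A small simulated illustration, not tied to the document's own examples:

```python
# Sketch: choosing the number of factors to retain in EFA via the Kaiser
# criterion. The two-factor dataset below is simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
f1 = rng.normal(size=n)              # two latent factors
f2 = rng.normal(size=n)
X = np.column_stack([
    f1 + 0.3 * rng.normal(size=n),   # three items loading on factor 1
    f1 + 0.3 * rng.normal(size=n),
    f1 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),   # three items loading on factor 2
    f2 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
])

R = np.corrcoef(X, rowvar=False)         # item correlation matrix
eigvals = np.linalg.eigvalsh(R)[::-1]    # eigenvalues, descending
n_factors = int(np.sum(eigvals > 1.0))   # Kaiser criterion
print("retain", n_factors, "factors")
```

In practice the scree plot and interpretability of rotated loadings are weighed alongside the eigenvalue rule, as the summary above notes.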
Evaluates a meta analysis of family therapy interventions for families facing physical illness.
The slide presentation and article is discussed in greater detail at http://jcoynester.wordpress.com/2013/08/12/interventions-for-the-family-in-chronic-illness-a-meta-analysis-i-like/
This document summarizes several large primary care databases in the United Kingdom, including the Clinical Practice Research Datalink (CPRD) and The Health Improvement Network (THIN) database. It provides details on the number of practices and patients covered by each database. The document also discusses the structure of the primary care databases, tools for analyzing the data, and examples of using the data to study conditions like diabetes. Finally, it presents results from studies examining the impact of financial incentives on quality of care indicators in the UK Quality and Outcomes Framework.
Investigating the relationship between quality of primary care and premature ... (Evangelos Kontopantelis)
This document describes a study investigating the relationship between quality of primary care and premature mortality in England. The study used a longitudinal spatial design to analyze mortality rates and quality of care data at the neighborhood level from 2007-2012. Multiple linear regression analyses found that higher overall quality scores on the UK's Quality and Outcomes Framework for primary care were not significantly associated with lower all-cause premature mortality rates at the neighborhood level after accounting for deprivation, rurality, and disease burden. Higher disease burden was significantly associated with higher mortality rates.
Re-analysis of the Cochrane Library data and heterogeneity challenges (Evangelos Kontopantelis)
Heterogeneity issues and a re-analysis of the Cochrane Library data. Presented in the 35th Annual Conference of the International Society for Clinical Biostatistics (ISCB35) in Vienna
The document provides an overview of various UK primary care and population health databases, including Clinical Practice Research Datalink (CPRD), The Health Improvement Network (THIN), QResearch, and ResearchOne. It describes the size and coverage of each database, as well as what types of medical data they contain. Additionally, it outlines other relevant UK data sources like Quality and Outcomes Framework (QOF) datasets, patient satisfaction surveys, census and mortality records, and hospitalization databases that can help provide context when analyzing primary care databases.
This study re-analyzed data from the Cochrane Library to evaluate methods for estimating between-study heterogeneity in meta-analyses. The researchers downloaded RevMan files from over 3,800 Cochrane reviews containing over 57,000 meta-analyses. They evaluated methods for estimating the between-study variance (tau-squared) using simulated and real Cochrane data. Their results showed that the DerSimonian-Laird bootstrap method performed best overall at estimating tau-squared and detecting heterogeneity, especially in small meta-analyses. However, over 50% of small meta-analyses in the Cochrane data failed to detect high between-study heterogeneity. The study highlights limitations in commonly used methods for accounting for heterogeneity in meta-analyses.
This document summarizes a re-analysis of meta-analysis data from the Cochrane Library. It examines the performance of different methods for estimating between-study heterogeneity and explores model selection in published meta-analyses. Simulation studies were conducted to compare heterogeneity estimators. Over 57,000 meta-analyses from the Cochrane Library were also analyzed. Results showed that the DerSimonian-Laird estimator often failed to detect high between-study heterogeneity, particularly in small meta-analyses. Bayesian methods performed well for very small meta-analyses. In the Cochrane data, over 30% of meta-analyses had only 2 studies and the random-effects model was more commonly used with larger numbers of studies.
This document summarizes a presentation about opening up clinical performance variation and financial incentives in primary care quality of care. It discusses the Quality and Outcomes Framework (QOF), a pay-for-performance program introduced in 2004 as part of a new GP contract in the UK. The QOF rewards general practices for achieving quality targets in chronic disease care. It has expanded over time to include more indicators and domains. While initially estimated to cost £1.8 billion over 3 years, the program has cost over £9 billion after its first 9 years. The presentation examines research on the impact and effectiveness of the QOF program.
The document analyzes the relationship between clinical computing systems used by family practices in the UK and their performance under the Quality and Outcomes Framework (QOF) pay-for-performance scheme between 2007-2011. Statistical models found that practices' choice of clinical computing system was a significant predictor of their QOF achievement scores, with some systems associated with better performance than others. Practices using the Vision 3 or Synergy systems tended to score highest overall, while those using the PCS system tended to score the lowest. Performance varied by the type of clinical activities as well.
Effect of Financial Incentives on Incentivised and Non-Incentivised Clinical Activities: Utilising Primary Care Databases to answer clinical, policy and methodological questions
This document summarizes research investigating the effect of provider incentives for influenza immunization through longitudinal studies using two datasets. It finds that reported achievement generally increased over time with the Quality and Outcomes Framework (QOF), but this was partly due to increased exception reporting. Increasing the upper threshold for one indicator led to greater improvements in reported achievement compared to other indicators. Analyzing data prior to QOF allows disentangling the effects of different incentive schemes over time on immunization rates in various patient groups.
The document analyzes the impact of the UK's Quality and Outcomes Framework (QOF), a pay-for-performance scheme for primary care physicians introduced in 2004. It compares changes in quality indicators that were fully incentivized by the QOF, partially incentivized, and non-incentivized. In the short-term (2004/05), fully incentivized indicators showed the largest improvements above expectations, while partially incentivized treatment indicators declined. In the long-term (2006/07), fully incentivized indicators continued improving and non-incentivized/partially incentivized indicators declined below expectations, suggesting the QOF primarily impacted incentivized aspects of care.
This document discusses using electronic patient data from the United Kingdom's General Practice Research Database (GPRD) for primary care research. It provides background on the development of electronic patient records in the UK and the Quality and Outcomes Framework that incentivized general practices to computerize. The GPRD is described as containing longitudinal data from over 500 general practices that can be used to study disease prevalences, clinical quality indicators, and patient comorbidities over time. The document outlines the research team's process of developing code lists to extract relevant data from the GPRD for their studies on the effect of the Quality and Outcomes Framework on clinical quality in primary care.
This document summarizes the results of an analysis of the 2007-08 UK GP Patient Survey data. The analysis used multilevel logistic regression to examine how patient satisfaction and experience relates to patient, practice, and regional characteristics. Key findings include:
- Patient age, employment status, and ethnicity significantly impacted satisfaction levels, with older patients, non-full time workers, and white British individuals most satisfied.
- Practice size strongly influenced satisfaction and experience, except for satisfaction with hours - larger practices saw lower ratings.
- The three models examined relationships for all patients, working patients, and interactions between key predictors and found practice size and patient demographics to be major drivers of satisfaction.
The document summarizes a study that analyzed the impact of the Quality and Outcomes Framework (QOF), a pay-for-performance scheme introduced in 2004 for primary care in the UK. The study used an interrupted time series analysis of clinical quality data from 1998 to 2007 to evaluate whether quality of care improved for conditions like asthma, coronary heart disease, and diabetes after the introduction of the QOF. The results showed the QOF accelerated short-term quality improvements for asthma and diabetes but had a more mixed impact for coronary heart disease. Post-QOF trends differed between incentivized and non-incentivized quality measures for some conditions. Overall, the study found significant quality improvements over time but a nuanced effect of the QOF.
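An interrupted time series analysis of this kind is typically fit as a segmented regression with terms for baseline trend, level change, and slope change at the intervention point. A simulated sketch (the QOF study itself used different data and software):

```python
# Sketch: segmented (interrupted time series) regression fit by OLS.
# The series is simulated; the intervention occurs at time t0.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(40).astype(float)   # 40 time points
t0 = 20.0                         # intervention point
post = (t >= t0).astype(float)

# True process: baseline trend 0.5/period, +4.0 level jump, +0.3 extra slope
y = 10 + 0.5 * t + 4.0 * post + 0.3 * (t - t0) * post + rng.normal(0, 1, t.size)

# Design matrix: intercept, trend, level change, slope change
X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, trend, level_change, slope_change = beta
```

The `level_change` and `slope_change` coefficients are what such studies interpret as the short-term jump and the longer-term change in trajectory after the scheme's introduction.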
RSS 2009 - Investigating the impact of the QOF on quality of primary care (Evangelos Kontopantelis)
The document summarizes research investigating the impact of the UK's Quality and Outcomes Framework (QOF) pay-for-performance scheme on primary care quality. It finds that in the short-term, quality of care increased more than expected for incentivized measures, but decreased for some non-incentivized measures. Over the long-term, quality continued improving for incentivized measures but effects on non-incentivized care were mixed, with no clear declines. The study used a large patient database and statistical modeling to analyze changes in quality indicators for conditions and care activities both inside and outside the QOF incentives.
This document summarizes the results of an analysis of the 2007-08 UK GP Patient Survey, which assessed patient satisfaction with access to primary care. The analysis found that while overall satisfaction was high, it varied based on patient, practice, and location characteristics. Patient age, ethnicity, and employment status most impacted satisfaction levels. Having the ability to take time off work greatly improved satisfaction for employed patients. Practice size also had a strong influence, with smaller practices receiving higher satisfaction ratings. Geographic location made a difference, as patients in northeast England reported the best experiences.
This document discusses different methods for conducting random effects meta-analyses when study effects are non-normally distributed. It simulates various non-normal distributions for true effects and compares the performance of fixed effects, DerSimonian-Laird, maximum likelihood, profile likelihood, permutations, and t-test methods. The results show that the performance of meta-analysis methods is robust to non-normal distributions. However, in the presence of heterogeneity, permutations and profile likelihood methods maintain accurate coverage even with small sample sizes, making them preferable choices.
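The permutation approach mentioned above can be illustrated with a sign-flip test of the pooled effect; the data and the exact scheme below are a simplified sketch for illustration, not the study's actual procedure:

```python
# Sketch: sign-flip permutation test of the pooled (inverse-variance
# weighted) effect, in the spirit of permutation meta-analysis methods.
# All data are invented.
import numpy as np

rng = np.random.default_rng(2)
yi = np.array([0.42, 0.31, 0.58, 0.12, 0.45, 0.38, 0.27])  # study effects
vi = np.array([0.05, 0.04, 0.09, 0.06, 0.07, 0.05, 0.04])  # variances
wi = 1.0 / vi

def pooled(y):
    return np.sum(wi * y) / np.sum(wi)

obs = pooled(yi)
# Under H0 (no effect), each study's sign is exchangeable: flip at random
flips = rng.choice([-1.0, 1.0], size=(10_000, yi.size))
null = np.array([pooled(f * yi) for f in flips])
p_value = np.mean(np.abs(null) >= abs(obs))
```

With only k studies there are just 2^k sign patterns, so the attainable p-values are coarse for small meta-analyses, one reason such methods behave differently from likelihood-based ones.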
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu... (Scintica Instrumentation)
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
The debris of the ‘last major merger’ is dynamically young (Sérgio Sacani)
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the ‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’ did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita...Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfSelcen Ozturkcan
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Immersive Learning That Works: Research Grounding and Paths Forward
RSS local 2012 - Software challenges in meta-analysis
1. Software and model selection challenges in meta-analysis
MetaEasy, model assumptions and homogeneity
Evan Kontopantelis, David Reeves
Health Sciences Primary Care Research Centre, University of Manchester
RSS Primary Care Study Group
Errol Street, 2 July 2012
2. Outline
1 Meta-analysis overview
  The heterogeneity issue
  More challenges
2 A practical guide
  the MetaEasy add-in
  metaeff & metaan
  Methods and performance
  τ̂² = 0
3 Summary
3. Heterogeneity: the big bad wolf
When the effect of the intervention varies significantly from one study to another.
It can be attributed to clinical and/or methodological diversity.
  Clinical: variability that arises from different populations, interventions, outcomes and follow-up times.
  Methodological: relates to differences in trial design and quality.
Detecting, quantifying and dealing with heterogeneity can be very hard.
4. Absence of heterogeneity
Assumes that the true effects of the studies are all equal and deviations occur because of imprecision of results.
Analysed with the fixed-effects method.
5. Presence of heterogeneity
Assumes that there is variation in the size of the true effect among studies (in addition to the imprecision of results).
Analysed with random-effects methods.
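Both slides reduce to the same inverse-variance pooling; only the variance used in the weights differs. A minimal Python sketch (purely illustrative, not part of the software discussed later; the between-study variance tau² is supplied by hand rather than estimated) shows the contrast:

```python
import math

def pooled_effect(effects, ses, tau2=0.0):
    """Inverse-variance pooled effect and its standard error.

    tau2 = 0 reproduces the fixed-effect estimate (all true effects
    equal); tau2 > 0 gives a random-effects estimate, where the
    between-study variance inflates each study's variance and
    flattens the weights across studies.
    """
    weights = [1.0 / (se**2 + tau2) for se in ses]
    est = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    return est, math.sqrt(1.0 / sum(weights))

# Three hypothetical study effects with their standard errors
effects, ses = [0.10, 0.30, 0.50], [0.10, 0.15, 0.20]
fe, fe_se = pooled_effect(effects, ses)             # fixed-effect
re, re_se = pooled_effect(effects, ses, tau2=0.04)  # random-effects
```

Note that the random-effects standard error comes out larger, reflecting the extra between-study uncertainty.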
6. Challenges with meta-analysis
Heterogeneity is common and the fixed-effect model is under fire.
Methods are asymptotic: accuracy improves as studies increase. But what if we only have a handful, as is usually the case?
Almost all random-effects models (except Profile Likelihood) do not take into account the uncertainty in τ̂². Is this, practically, a problem?
DerSimonian-Laird is the most common method of analysis, since it is easy to implement and widely available, but is it the best?
7. Challenges with meta-analysis (continued)
Can be difficult to organise, since...
  outcomes are likely to have been disseminated using a variety of statistical parameters
  appropriate transformations to a common format are required
  a tedious task, requiring at least some statistical adeptness
Parametric random-effects models assume that both the effects and errors are normally distributed. Are methods robust?
Sometimes heterogeneity is estimated to be zero, especially when the number of studies is small. Good news?
8. A practical guide
Based on our original work...
9. Organising
Data initially collected using data extraction forms.
A spreadsheet is the next logical step to summarise the reported study outcomes and identify missing data.
Since in most cases MS Excel will be used, we developed an add-in that can help with most processes involved in meta-analysis.
More useful when the need to combine differently reported outcomes arises.
10. What it can do
Help with the data collection using pre-formatted worksheets.
Its unique feature, which can be supplementary to other meta-analysis software, is the implementation of methods for calculating effect sizes (& SEs) from different input types.
For each outcome of each study...
  it identifies which methods can be used
  calculates an effect size and its standard error
  selects the most precise method for each outcome
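The transformation to a common format is the kind of conversion being automated here; a minimal Python illustration (ours, not the add-in's actual code) derives a standardized mean difference and its approximate standard error from two common input types, group summaries and an independent-samples t statistic:

```python
import math

def smd_from_means(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d from group means and SDs, with its approximate SE."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, se

def smd_from_t(t, n1, n2):
    """Cohen's d recovered from an independent-samples t statistic."""
    d = t * math.sqrt(1 / n1 + 1 / n2)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, se
```

Both routes recover the same effect size when the inputs are consistent, which is what allows differently reported outcomes to be pooled on one scale.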
11. What it can do (continued)
Creates a forest plot that summarises all the outcomes, organised by study.
Uses a variety of standard and advanced meta-analysis methods to calculate an overall effect.
  a variety of options is available for selecting which outcome(s) are to be meta-analysed from each study
Plots the results in a second forest plot.
Reports a variety of heterogeneity measures, including Cochran's Q, I², H²_M and τ̂² (and its estimated confidence interval under the Profile Likelihood method).
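These heterogeneity measures all derive from the same inverse-variance weights; a short Python sketch (an illustration, not MetaEasy's implementation; the H² computed here is the basic Q/(k-1) version rather than the H²_M variant) computes Cochran's Q, H², I² and the DerSimonian-Laird τ̂² from study effects and standard errors:

```python
def heterogeneity(effects, ses):
    """Cochran's Q, H2, I2 and the DerSimonian-Laird tau2 estimate."""
    w = [1.0 / se**2 for se in ses]
    k = len(effects)
    mu = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - mu) ** 2 for wi, y in zip(w, effects))
    h2 = q / (k - 1)
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # moment estimate, truncated at zero
    return q, h2, i2, tau2
```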
12. Advantages
Free (provided Microsoft Excel is available).
Easy to use and time saving.
Extracted data from each study are easily accessible, can be quickly edited or corrected and the analysis repeated.
Choice of many meta-analysis models, including some advanced methods not currently available in other software packages (e.g. Permutations, Profile Likelihood, REML).
Unique forest plot that allows multiple outcomes per study.
Effect sizes and standard errors can be exported for use in other meta-analysis software packages.
13. Installing
Latest version available from www.statanalysis.co.uk
Compatible with Excel 2003, 2007 and 2010.
Manual provided, but also described in:
Kontopantelis E and Reeves D. MetaEasy: A Meta-Analysis Add-In for Microsoft Excel. Journal of Statistical Software, 30(7):1-25, 2009.
14. Stata implementation
MetaEasy methods implemented in Stata under:
  metaeff, which uses the different study inputs to provide effect sizes and SEs
  metaan, which meta-analyses the study effects with a fixed-effect or one of five available random-effects models
To install, type in Stata:
  ssc install <command name>
  help <command name>
Described in:
Kontopantelis E and Reeves D. metaan: Random-effects meta-analysis. The Stata Journal, 10(3):395-407, 2010.
15. Many random-effects methods: which to use?
DerSimonian-Laird (DL): moment-based estimator of both within- and between-study variance.
Maximum Likelihood (ML): improves the variance estimate using iteration.
Restricted Maximum Likelihood (REML): an ML variation that uses a likelihood function calculated from a transformed set of data.
Profile Likelihood (PL): a more advanced version of ML that uses nested iterations for converging.
Permutations method (PE): simulates the distribution of the overall effect using the observed data.
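The iteration behind the ML estimator can be made concrete with a fixed-point sketch (a simplified illustration under the usual normal-normal model, not the metaan implementation):

```python
def ml_tau2(effects, ses, iters=200):
    """Maximum-likelihood tau2 via fixed-point iteration.

    At each step the pooled mean is recomputed with the current
    weights, then tau2 is updated from the weighted residuals.
    """
    tau2 = 0.0
    for _ in range(iters):
        w = [1.0 / (se**2 + tau2) for se in ses]
        mu = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
        num = sum(wi**2 * ((y - mu) ** 2 - se**2)
                  for wi, y, se in zip(w, effects, ses))
        tau2 = max(0.0, num / sum(wi**2 for wi in w))
    return tau2
```

When the update goes negative it truncates at zero, the τ̂² = 0 situation flagged in the outline.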
16. Performance evaluation: our approach
Simulated various distributions for the true effects:
  Normal
  Skew-normal
  Uniform
  Bimodal
Created datasets of 10,000 meta-analyses for various numbers of studies and different degrees of heterogeneity, for each distribution.
17. Performance evaluation: our approach (continued)
Compared all methods in terms of:
  Coverage, the rate of true negatives when the overall true effect is zero.
  Power, the rate of true positives when the true overall effect is non-zero.
  Confidence interval performance, a measure of how wide the CI (estimated around the effect) is, compared to its true width.
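A toy version of this design (a couple of thousand replications rather than 10,000, with normally distributed true effects only, in Python) estimates the coverage of the fixed-effect method when between-study variance is present:

```python
import math
import random

def fe_ci(effects, ses):
    """Fixed-effect estimate with a 95% confidence interval."""
    w = [1.0 / se**2 for se in ses]
    mu = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return mu - 1.96 * se, mu + 1.96 * se

def fe_coverage(k=5, tau=0.3, reps=2000, seed=1):
    """Share of replications whose FE interval covers the true mean (0)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        ses = [0.2] * k
        # true study effects drawn around 0 with between-study SD tau,
        # then observed with each study's sampling error
        effects = [rng.gauss(0, tau) + rng.gauss(0, s) for s in ses]
        lo, hi = fe_ci(effects, ses)
        hits += lo <= 0.0 <= hi
    return hits / reps
```

With tau=0 the estimated coverage should sit close to the nominal 95%; with tau>0 the fixed-effect interval is too narrow and coverage falls well below nominal, the pattern the full simulations quantify.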
18. Homogeneity: zero between-study variance
[Figure: three panels plotting coverage, power (25th centile) and overall estimation performance against the number of studies (2 to 35) for the FE, DL, ML, REML, PL and PE methods, under zero between-study variance.]
19. Coverage performance: small and large heterogeneity under various distributional assumptions
[Figure: coverage against the number of studies (2 to 35) for the FE, DL, ML, REML, PL and PE methods, under normal, skew-normal, bimodal and uniform true-effect distributions, at H²=1.18 (I²=15%) and H²=2.78 (I²=64%).]
20. Power performance: small and large heterogeneity under various distributional assumptions
[Figure: power (25th centile) against the number of studies (2 to 35) for the FE, DL, ML, REML, PL and PE methods, under normal, skew-normal, bimodal and uniform true-effect distributions, at H²=1.18 (I²=15%) and H²=2.78 (I²=64%).]
21. CI performance: small and large heterogeneity under various distributional assumptions
[Figure: confidence interval performance against the number of studies (2 to 35) for the FE, DL, ML, REML, PL and PE methods, under normal, skew-normal, bimodal and uniform true-effect distributions, at H²=1.18 (I²=15%) and H²=2.78 (I²=64%).]
22. [Poster title]
[Replace the following names and titles with those of the actual contributors: Helge Hoeing, PhD1; Carol Philips, PhD2; Jonathan Haas, RN, BSN, MHA3, and Kimberly B. Zimmerman, MD4
1[Add affiliation for first contributor], 2[Add affiliation for second contributor], 3[Add affiliation for third contributor], 4[Add affiliation for fourth contributor]
Meta-analysis overview
A practical guide
Summary
the MetaEasy add-in
metaeff & metaan
Methods and performance
ˆτ2
= 0
Coverage by method
Large heterogeneity across various between-study variance distributions
[Figure: Coverage vs. number of studies (2–35) for each method (FE, DL, ML, REML, PL, PE), at H² = 2.78, I² = 64%, under five between-study variance distributions: zero variance, normal, skew-normal, bimodal, uniform.]
Power by method
Large heterogeneity across various between-study variance distributions
[Figure: Power (25th centile) vs. number of studies (2–35) for each method (FE, DL, ML, REML, PL, PE), at H² = 2.78, I² = 64%, under five between-study variance distributions: zero variance, normal, skew-normal, bimodal, uniform.]
CI performance by method
Large heterogeneity across various between-study variance distributions
[Figure: Overall estimation (confidence interval) performance vs. number of studies (2–35) for each method (FE, DL, ML, REML, PL, PE), at H² = 2.78, I² = 64%, under five between-study variance distributions: zero variance, normal, skew-normal, bimodal, uniform.]
Which method then?
Within any given method, the results were consistent across all distribution shapes; the methods are therefore highly robust even to severe violations of the normality assumption.
Choose PE if the priority is an accurate Type I error rate (false positives).
However, its low power makes PE a poor choice when control of the Type II error rate (false negatives) is also important, and it cannot be used with fewer than 6 studies.
Which method then?
For very small numbers of studies (≤5), only PL gives coverage above 90% and an accurate CI.
PL has 'reasonable' coverage in most situations, especially for moderate and large heterogeneity, giving it an edge over the other methods.
REML and DL perform similarly, and better than PL only when heterogeneity is low (I² < 15%).
The computational complexity of REML is not justified.
Bring on the champagne?
An estimate τ̂² = 0 does not necessarily mean homogeneity.
Most methods use biased estimators, and it is not uncommon to obtain a negative τ̂², which the model truncates to 0.
We identified a large percentage of cases where the estimators failed to detect existing heterogeneity.
In our simulations, for 5 studies and I² ≈ 29%, 30% of the meta-analyses were erroneously estimated to be homogeneous under the DL method; 32% under REML and 48% under ML and PL.
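The truncation at zero can be seen directly in the DerSimonian–Laird moment estimator. A minimal Python sketch (names are ours; this is not the metaan implementation):

```python
def dl_tau_squared(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study
    variance tau^2, truncated at zero as in standard software."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q: weighted squared deviations from the FE pooled estimate
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    denom = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = (q - (k - 1)) / denom                     # can be negative...
    return max(0.0, tau2)                            # ...so it is set to 0
```

With identical study effects, Q = 0 < k − 1, so the raw moment estimate is negative and the returned τ̂² is exactly 0 — the situation the slide warns about.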
What does it mean?
In these cases coverage was substandard: on average, over 10% lower than in cases where τ̂² > 0.
The problem becomes less pronounced as the number of studies and the level of heterogeneity increase.
Better estimators are needed.
A large number of meta-analyses of apparently 'homogeneous' studies may have reached a wrong conclusion.
What to take home
MetaEasy can help you organise your meta-analysis and is especially useful if you need to combine continuous and binary outcomes.
The methods are implemented in Stata in metaeff and metaan.
A zero τ̂² is a reason to worry: heterogeneity might be present but cannot be measured or accounted for in the model.
If τ̂² > 0, even if very small, use a random-effects model.
The DL method works reasonably well under all distributions, especially for low levels of heterogeneity.
Profile likelihood, which takes into account the uncertainty in τ̂², works better when I² ≥ 15%.
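The take-home rule — use a random-effects model whenever τ̂² > 0 — amounts to inflating each study's variance by τ̂² before pooling, which widens the confidence interval. A hedged Python sketch (illustrative only, not metaan's code):

```python
import math

def re_pool(effects, variances, tau2):
    """Random-effects pooled estimate and 95% CI, with study
    weights 1/(v_i + tau^2); tau2 = 0 reduces to the fixed-effect model."""
    w = [1.0 / (v + tau2) for v in variances]
    sw = sum(w)
    est = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    se = math.sqrt(1.0 / sw)
    return est, (est - 1.96 * se, est + 1.96 * se)
```

With equal within-study variances the pooled estimate is unchanged, but any τ̂² > 0 produces a wider CI than the fixed-effect analysis — exactly why ignoring a positive τ̂² overstates precision.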
Appendix
Thank you!
Key references
Comments, suggestions:
e.kontopantelis@manchester.ac.uk
References
Kontopantelis E, Reeves D.
MetaEasy: A Meta-Analysis Add-In for Microsoft Excel.
Journal of Statistical Software, 30(7):1-25, 2009.
Kontopantelis E, Reeves D.
metaan: Random-effects meta-analysis.
The Stata Journal, 10(3):395-407, 2010.
Kontopantelis E, Reeves D.
Performance of statistical methods for meta-analysis when
true study effects are non-normally distributed: A simulation
study.
Stat Methods Med Res, published online Dec 9 2010.