This document discusses experimental and quasi-experimental designs. It outlines the key components of the classical experimental design, including independent and dependent variables, experimental and control groups, and pretesting and posttesting. It also discusses threats to internal and external validity, as well as variations such as quasi-experimental designs, which use nonequivalent groups or time series when randomization is not possible. Quasi-experiments aim to make groups as comparable as possible through matching or the use of natural cohorts.
This presentation is for educational purposes only. I do not own the rights to the written material, pictures, or illustrations used.
This is being uploaded for students who are trying to understand what a quasi-experimental research design should look like.
2. OUTLINE
• Introduction
• The Classical Experiment
• Experiments and Causal Inference
• Variations in the Classical Experimental Design
• Quasi-Experimental Designs
3.
• Experimentation is an approach to research best suited for explanation and evaluation
• An experiment is "a process of observation, to be carried out in a situation expressly brought about for that purpose"
• Experiments involve:
• Taking action
• Observing the consequences of that action
• Especially suited for hypothesis testing
4.
• Variables, time order, measures, and groups are the central features of the classical experiment
• It involves three major pairs of components:
• Independent and dependent variables
• Pretesting and posttesting
• Experimental and control groups
5.
• The Independent Variable takes the form of a dichotomous stimulus that is either present or absent
• It varies (i.e., is independent) in our experimental process
• "The Cause"
6.
• The outcome, the effect we expect to see
• Depends on the Independent Variable
• Might be physical conditions, social behavior, attitudes, feelings, or beliefs
• "The Effect"
7.
• Subjects are initially measured on the Dependent Variable before exposure to the Independent Variable (pretested)
• Then they are exposed to the Independent Variable
• Then they are re-measured on the Dependent Variable (posttested)
• Differences between the two measurements of the Dependent Variable are attributed to the influence of the Independent Variable
8.
• Experimental group – exposed to whatever treatment, policy, or initiative we are testing
• Control group – very similar to the experimental group, except that it is NOT exposed
• If we see a difference, we want to make sure it is due to the Independent Variable and not to a difference between the two groups
9.
• Pointed to the necessity of control groups
• Independent Variable: improved working conditions (better lighting)
• Dependent Variable: improvement in employee satisfaction and productivity
• Workers were responding more to the attention than to the improved working conditions (the classic Hawthorne effect)
10.
• We often don't want people to know whether they are receiving the treatment or not
• We expose our control group to a "dummy" Independent Variable just so we are treating everyone the same
• Medical research: participants don't know what they are taking
• Ensures that changes in the Dependent Variable actually result from the Independent Variable and are not psychologically based
11.
• Experimenters may be more likely to "observe" improvements among those who received the drug
• In a Double-Blind experiment, neither the subjects nor the experimenters know which is the experimental group and which is the control group
• Example: the Broward County, Florida and Portland, Oregon domestic violence policing units study of "keeping safe" strategies
12.
• First, we must decide on the target population – the group to which the results of the experiment will apply
• Second, we must decide how to select particular members of that group for the experiment
• Cardinal rule – ensure that the Experimental and Control groups are as similar as possible
• Randomization serves this purpose
13.
• "Randomization" is the central feature of the classical experiment
• It produces experimental and control groups that are statistically equivalent
• Farrington and associates: "Randomization insures that the average unit in the treatment group is approx. equivalent to the average unit in another group before the treatment is applied"
• "All Other Things Are Equal"
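Random assignment as described above can be sketched in a few lines of Python. The fixed seed and group sizes are illustrative assumptions, used only so the sketch is reproducible.

```python
import random

def randomize(subjects, seed=0):
    """Randomly split a subject pool into equal-sized
    experimental and control groups."""
    rng = random.Random(seed)  # fixed seed only for reproducibility
    pool = list(subjects)
    rng.shuffle(pool)          # every assignment equally likely
    half = len(pool) // 2
    return pool[:half], pool[half:]

experimental, control = randomize(range(100))
```

Because assignment depends only on chance, the two groups are statistically equivalent on average — on measured and unmeasured characteristics alike — before the treatment is applied.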
14.
• Experiments potentially control for many threats to the validity of causal inference
• Experimental design ensures:
• Cause precedes effect, via taking the posttest
• An empirical correlation exists, via comparing pretest to posttest
• No spurious third variable influences the correlation, via the posttest comparison between experimental and control groups and via randomization
15.
• Conclusions drawn from experimental results may not reflect what actually went on in the experiment
1. History: external events may occur during the course of the experiment
2. Maturation: people are constantly growing and changing
3. Testing: the process of testing and retesting can itself affect subjects' responses
16.
4. Instrumentation: changes in the measurement process
5. Statistical regression: extreme scores regress toward the mean
6. Selection biases: the way in which subjects are chosen (use random assignment)
7. Experimental mortality: subjects may drop out prior to completion of the experiment
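Statistical regression (threat 5) is easy to demonstrate with a small simulation. This sketch is illustrative only; the population mean, noise level, and sample size are arbitrary assumptions.

```python
import random

rng = random.Random(42)

# Each observed score = stable true score + transient measurement noise.
true_scores = [rng.gauss(100, 10) for _ in range(10_000)]
test1 = [t + rng.gauss(0, 10) for t in true_scores]
test2 = [t + rng.gauss(0, 10) for t in true_scores]

# Select the "extreme" subjects: the top decile on the first test.
cutoff = sorted(test1)[-1000]
extreme = [i for i, s in enumerate(test1) if s >= cutoff]

mean1 = sum(test1[i] for i in extreme) / len(extreme)
mean2 = sum(test2[i] for i in extreme) / len(extreme)
# On retest, the selected group's mean falls back toward the
# population mean of 100, with no treatment applied at all.
```

If subjects are selected for a program *because* their scores are extreme, part of any apparent improvement at posttest is this artifact rather than a treatment effect — which is why a control group selected the same way is essential.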
17.
8. Causal time order: ambiguity about the order of the stimulus and the Dependent Variable – which caused which?
9. Diffusion/imitation of treatments: the experimental group may pass on elements of the treatment to the control group when the groups communicate
10. Compensatory treatment: the control group is deprived of something considered to be of value
18.
11. Compensatory rivalry: the control group, deprived of the stimulus, may try to compensate by working harder
12. Demoralization: feelings of deprivation among the control group may result in subjects giving up
19.
• Potential threats to internal validity are only some of the complications faced by experimenters; they also face the problem of generalizing from experimental findings to the real world
• Two dimensions of generalizability:
• Construct Validity
• External Validity
20.
• Concerned with generalizing from the experiment to actual causal processes in the real world
• Link constructs and measures to theory
• Clearly indicate which constructs are represented by which measures
• Decide how much treatment is required to produce change in the Dependent Variable
21.
• A significant concern for experiments conducted under carefully controlled rather than more natural conditions
• Tight control reduces internal validity threats but limits generalizability
• John Eck (2002) calls this the "diabolical dilemma"
• Suggestion: explanatory studies should emphasize internal validity; applied studies should emphasize external validity
22.
• Becomes an issue when findings are based on small samples
• More cases allow you to reliably detect small differences; fewer cases allow detection of only large differences
• Finding cause-and-effect relationships through experiments depends on two related factors:
• The number of subjects
• The magnitude of posttest differences between the experimental and control groups
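The relationship between sample size and the smallest reliably detectable difference can be sketched with the standard two-sample approximation. The standard deviation, group sizes, and the z = 1.96 threshold are illustrative assumptions.

```python
import math

def min_detectable_difference(std_dev, n_per_group, z=1.96):
    """Approximate smallest difference between two group means
    distinguishable from chance at roughly the 95% level."""
    # Standard error of the difference between two independent means.
    standard_error = std_dev * math.sqrt(2 / n_per_group)
    return z * standard_error

small_n = min_detectable_difference(std_dev=10, n_per_group=25)
large_n = min_detectable_difference(std_dev=10, n_per_group=400)
# With 25 subjects per group, only a difference of roughly half a
# standard deviation is detectable; with 400 per group, a difference
# about a quarter that size can be detected.
```

This is why small-sample experiments can only confirm large posttest differences: modest but real effects simply fall below the detection threshold.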
23.
• Four basic building blocks are present in experimental designs:
1. The number of experimental and control groups
2. The number and variation of experimental stimuli
3. The number of pretest and posttest measurements
4. The procedures used to select subjects and assign them to groups
• Variations on the classical experiment can be produced by manipulating these building blocks
24.
• Used when randomization is not possible for legal or ethical reasons
• This renders them subject to internal validity threats
• Quasi = "to a certain degree"
• Two categories:
• Nonequivalent-groups designs
• Time-series designs
25.
• When we cannot randomize, we cannot assume equivalency; hence the name
• We take steps to make the groups as comparable as possible
• Match subjects in the Experimental and Control groups on important variables likely related to the Dependent Variable under study
• Aggregate matching – the groups have comparable average characteristics
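An aggregate-matching check like the one above can be sketched as a comparison of group averages. The matching variables, values, and tolerance here are hypothetical, chosen only to illustrate the idea.

```python
def mean(xs):
    return sum(xs) / len(xs)

def aggregate_match_ok(group_a, group_b, tolerance):
    """Return True if the two groups' average characteristics
    differ by no more than `tolerance` on every matching variable."""
    return all(
        abs(mean(group_a[var]) - mean(group_b[var])) <= tolerance
        for var in group_a
    )

# Hypothetical matching variables (names and values are illustrative).
experimental = {"age": [24, 31, 28, 35], "prior_arrests": [1, 0, 2, 1]}
control      = {"age": [25, 30, 29, 34], "prior_arrests": [1, 1, 2, 0]}

comparable = aggregate_match_ok(experimental, control, tolerance=1.0)
```

Note the limitation this makes visible: comparability can only be checked on the variables we thought to measure, whereas randomization equates groups on unmeasured variables as well.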
26.
• Cohort – a group of subjects who enter or leave an institution at the same time
• Examples: a class of police officers who graduate from a training academy at the same time; all persons sentenced to probation in May
• It is necessary to ensure that the two cohorts being compared are actually comparable
27.
• Longitudinal studies examine a series of observations over time
• Interrupted time series – observations are compared before and after some intervention (used in cause-and-effect studies)
• The instrumentation threat to internal validity is likely, because changes in measurement may occur over a long period of time
• Often use measures produced by criminal justice organizations
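The simplest form of the interrupted time-series comparison can be sketched as below. The monthly counts and intervention point are invented for illustration; a real analysis would also examine trends, not just means.

```python
# Hypothetical monthly incident counts; the intervention
# takes effect at index 6 (i.e., month 7).
monthly_counts = [40, 42, 39, 41, 43, 40, 31, 29, 30, 28, 32, 30]
intervention = 6

before = monthly_counts[:intervention]
after = monthly_counts[intervention:]

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)

# A drop after the intervention is consistent with an effect,
# though history and instrumentation threats remain plausible.
change = mean_after - mean_before
```

Having several observations on each side of the intervention is what distinguishes this design from a one-group pretest/posttest: a stable pre-intervention series makes maturation and regression artifacts easier to rule out.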
28.
• Case-oriented research: a large number of variables are studied for a small number of cases or subjects (e.g., the Boston Gun Project)
• Variable-oriented research: a large number of cases are examined to understand a small number of variables
• Case study design: centered on an in-depth examination of one or a few cases on many dimensions