Data analysis is the process of bringing order, structure, and meaning to the mass of collected data. It is a messy, ambiguous, time-consuming, creative, and fascinating process. It does not proceed in a linear fashion; it is not neat. Qualitative data analysis is a search for general statements about relationships among categories of data.
2016 Symposium Poster - statistics - Final, by Brian Lin
This document discusses common pitfalls in statistical analysis and provides examples to illustrate typical mistakes. It notes that statistical significance does not always imply practical significance. Even with the same means and variances, different datasets can have very different distributions. Correlation does not necessarily indicate causation. Qualitative scales should not always be treated as quantitative variables. Choosing the appropriate statistical test is important to get the right results. Sample size calculations depend on study details and objectives. Involving statisticians early in the research process helps ensure proper experimental design and analysis.
Hypothesis Testing: Central Tendency – Normal (Compare 1:1), by Matt Hansen
An extension on a series about hypothesis testing, this lesson reviews the 2 Sample T & Paired T tests as central tendency measurements for normal distributions.
Khalid Shafi Abbasi received feedback from 15 raters as part of a 360-degree feedback process to improve his leadership skills. The feedback indicated that most raters felt communication and trust had increased since the last feedback process, though some felt Abbasi could ask for feedback more regularly. Ratings also showed most felt Abbasi's leadership effectiveness had improved. However, some felt he could better share his improvement plan with direct reports. The report provided Abbasi's scores on specific leadership competencies compared to averages, as well as written comments from individual raters to help Abbasi identify strengths and areas for further development.
Hypothesis Testing: Central Tendency – Normal (Compare 2+ Factors), by Matt Hansen
An extension on a series about hypothesis testing, this lesson reviews the ANOVA test as a central tendency measurement for normal distributions. It also explains what residuals and boxplots are and how to use them with the ANOVA test.
Hypothesis Testing: Central Tendency – Non-Normal (Compare 2+ Factors), by Matt Hansen
An extension on hypothesis testing, this lesson reviews the Mood’s Median & Kruskal-Wallis tests as central tendency measurements for non-normal distributions.
This document provides instruction on using the 1 variance test for hypothesis testing. It begins with an overview of why hypothesis testing is needed to build a transfer function model. It then reviews the 4-step process for hypothesis testing and provides a decision tree to help select the appropriate statistical test based on data type and characteristics. The document demonstrates how to perform a 1 variance test using Minitab through examples comparing standard deviation to a target value. It concludes by prompting the reader to apply the 1 variance test to factors identified in a previous lesson and consider how the results could influence organizational decisions and goals.
Hypothesis Testing: Central Tendency – Non-Normal (Compare 1:Standard), by Matt Hansen
An extension on hypothesis testing, this lesson reviews the 1 Sample Sign & Wilcoxon tests as central tendency measurements for non-normal distributions.
This document defines key terms used in data analysis and statistical inference, including population, sample, parameter, and statistic. It explains that statistics estimated from samples are used to infer unknown population parameters, and that error occurs since samples rather than entire populations are studied. The document also discusses theory and logic in data analysis, noting that theories are built on testable propositions and hypotheses are tested but never proven, instead only rejected or not rejected.
This document discusses key concepts in statistical estimation including:
- Estimation involves using sample data to infer properties of the population by calculating point estimates and interval estimates.
- A point estimate is a single value that estimates an unknown population parameter, while an interval estimate provides a range of plausible values for the parameter.
- A confidence interval gives the probability that the interval calculated from the sample data contains the true population parameter; 95% confidence intervals are the most common.
- Formulas for confidence intervals depend on whether the population standard deviation is known or unknown, and the sample size.
Hypothesis Testing: Central Tendency – Non-Normal (Compare 1:1), by Matt Hansen
This document provides instruction on using the Mann-Whitney test to compare the medians of two independent samples. It discusses when to use the Mann-Whitney test, how to run it in Minitab, and provides an example comparing the medians of two columns of sample data labeled MetricC1 and MetricC2. The results of running the Mann-Whitney test on this example are interpreted to determine if the medians are statistically different between the two samples. The document encourages applying the test to factors identified in a previous lesson and discussing how the results could impact an organization.
This lesson discusses hypothesis testing using the Chi2 test to compare proportions between groups. The Chi2 test can be used for goodness-of-fit tests to compare observed data to expected proportions. It can also be used for tests of association to compare proportions between two or more factors. Examples are provided to demonstrate Chi2 tests for goodness-of-fit on coin toss and die rolling data, as well as a test of association on call center volume data.
This document provides an overview and agenda for a hands-on introduction to data science. It includes the following sections: Data Science Overview and Intro to R (90 minutes), Exploratory Data Analysis (60 minutes), and Logistic Regression Model (30 minutes). The document then covers key concepts in data science including collecting and analyzing data to find insights to help decision making, using analytics to improve operations and innovations, and predicting problems before they occur. Machine learning and statistical techniques are also introduced such as supervised and unsupervised learning, parameters versus statistics, and calculating variance and standard deviation.
Distinction between outliers and influential data points w out hyp test, by Aditya Praveen Kumar
This document distinguishes between outliers and influential data points in regression analysis. An outlier is a data point whose response y does not follow the general trend of the other y values, while an influential point unduly influences the regression results. Through four examples, it shows that outliers may or may not be influential. Example 1 has no outliers or influential points. Example 2 has an outlier that is not an influential point. Example 3 has an influential point that is not an outlier. Example 4 has an outlier and influential point that significantly changes the regression slope. Outliers can influence results, but not all do; it is important to check for influential points.
This article provides a brief discussion of several statistical parameters that are most commonly used in any measurement and analysis process. There is a plethora of such parameters, but the most important and widely used ones are briefly described here.
Statistical Processes
Can descriptive statistical processes be used in determining relationships, differences, or effects in your research question and testable null hypothesis? Why or why not? Also, address the value of descriptive statistics for the forensic psychology research problem that you have identified for your course project. Read an article for additional information on descriptive statistics and pictorial data presentations.
300 words; follow APA rules for attributing sources.
Computing Descriptive Statistics
Computing Descriptive Statistics: “Ever Wonder What Secrets They Hold?” The Mean, Mode, Median, Variability, and Standard Deviation
Introduction
Before gaining an appreciation for the value of descriptive statistics in behavioral science environments, one must first become familiar with the types of measurement data these statistical processes use. Knowing the types of measurement data will help the decision maker make sure that the chosen statistical method will, indeed, produce the results needed and expected. Using the wrong type of measurement data with a selected statistical tool will produce erroneous results and lead to ineffective decision making.
Measurement, or numerical, data is divided into four types: nominal, ordinal, interval, and ratio. Through administering questionnaires, taking polls, conducting surveys, administering tests, and counting events, products, and a host of other numerical data instruments, the businessperson garners numerical values of all four types.
Nominal Data
Nominal data is the simplest of all four forms of numerical data. The mathematical values are assigned to that which is being assessed simply by arbitrarily assigning numerical values to a characteristic, event, occasion, or phenomenon. For example, a human resources (HR) manager wishes to determine the differences in leadership styles between managers who are at different geographical regions. To compute the differences, the HR manager might assign the following values: 1 = West, 2 = Midwest, 3 = North, and so on. The numerical values are not descriptive of anything other than the location and are not indicative of quantity.
Ordinal Data
In terms of ordinal data, the variables contained within the measurement instrument are ranked in order of importance. For example, a product-marketing specialist might be interested in how a consumer group would respond to a new product. To garner the information, the questionnaire administered to a group of consumers would include questions scaled as follows: 1 = Not Likely, 2 = Somewhat Likely, 3 = Likely, 4 = More Than Likely, and 5 = Most Likely. This creates a scale rank order from Not Likely to Most Likely with respect to acceptance of the new consumer product.
Interval Data
Oftentimes, in addition to being ordered, the differences (or intervals) between two adjacent measurement values on a measurement scale are identical. For example, the di ...
This document provides an overview of descriptive statistics and different types of measurement data. It discusses nominal, ordinal, interval, and ratio data and how each type is measured. It also defines and provides examples of key descriptive statistics like mean, median, mode, variability, standard deviation, and different ways to visually represent data through graphs and charts. The goal is to familiarize readers with descriptive statistics concepts before more advanced statistical analysis is introduced.
This document provides an overview and objectives for Chapter 3 of the textbook "Statistical Techniques in Business and Economics" by Lind. The chapter covers describing data through numerical measures of central tendency (mean, median, mode) and dispersion (range, variance, standard deviation). It includes examples of computing various measures like the weighted mean, median, mode, and interpreting their relationships. The document also lists learning activities for students such as reading the chapter, watching video lectures, completing practice problems in the book, and participating in an online discussion forum.
This document provides an overview of a workshop that demonstrates how to use Microsoft Excel and the Real Statistics add-in to perform statistical analysis and descriptive statistics. It discusses concepts like mean, standard deviation, and normal distribution. It then walks through examples of calculating the mean and standard deviation of student performance data in mathematics, and generating a histogram and normal distribution curve of those scores. The goal is to help teachers better understand and apply basic statistical and descriptive analysis in their research.
This document provides an overview of various quality control and statistical process control tools and techniques. It discusses statistical process control as a methodology for quality analysis and improvement using real-time quality data. Other topics covered include control charts, Pareto analysis, cause-and-effect diagrams, check sheets, histograms, measures of central tendency and dispersion, normal distribution, and variable control charts. The document aims to explain these statistical concepts and quality management tools and how they can be used for problem solving, identification, and measuring process improvement.
The two major areas of statistics are descriptive statistics and inferential statistics. In this presentation, the difference between the two is shown, including examples.
This document discusses various statistical concepts for summarizing and analyzing quantitative data, including:
- Descriptive statistics like mean, median, mode, range, and standard deviation to summarize central tendency and variability.
- Different measurement scales for data like nominal, ordinal, interval, and ratio scales.
- Graphical representations of data like histograms, bar graphs, and scatterplots.
- Correlational research which investigates relationships between two variables using the Pearson correlation coefficient.
SPSS Guide: Assessing Normality, Handling Missing Data, and Calculating Scores..., by ahmedragab433449
"This comprehensive SPSS guide covers essential topics in data analysis and statistical research. Key contents include:
Missing Data: Understanding and handling data gaps (Page 2)
Assessing Normality: Why and how to check normality in data sets (Page 6)
Interpretation of Output: A guide to exploring and interpreting SPSS outputs (Page 8)
Skewness and Kurtosis: Insights into data distribution (Page 11)
Kolmogorov-Smirnov and Shapiro-Wilk Tests: Testing for normality (Page 14)
Manipulating Data: Techniques and strategies for data manipulation (Page 25)
Calculating Total Scores and Reversing Negative Worded Items: SPSS guidance (Page 26)
Ideal for students, educators, researchers, and professionals in data analysis and statistics."
Statistics are used by organizations to measure and analyze business performance. American Express uses statistics such as total returns to shareholders, numbers of cardholders by age group, and cardholder spending by age to analyze business units, identify targeted customer groups, and inform marketing campaigns. Statistics on labor force characteristics by gender support the conclusion that male monthly incomes are typically higher than female incomes, though this does not necessarily mean males spend more.
Data science course with placement in Hyderabad, by maneesha2312
360DigiTMG delivers a data science course with placement in Hyderabad, where you can gain practical experience in key methods and tools through real-world projects. Study under skilled trainers and transform into a skilled Data Scientist. Enroll today!
This document contains a research report on comparing the shoe sizes of students from two Design and Technology classes. The report includes an introduction outlining the objectives, methodology, and topic of the research. Data on shoe sizes is presented in a table and chart. Measures of central tendency (mean, median, mode) and dispersion (standard deviation) are calculated and interpreted for each class. The analysis finds the modes are the same but means and standard deviations are slightly different, likely due to similar age ranges among the students. A reflection discusses the process of researching and completing the assignment with the goal of applying statistical concepts correctly.
Executive Program Practical Connection Assignment - 100 points, by BetseyCalderon89
This document discusses descriptive statistics and how to calculate and interpret various descriptive statistics, including mean, median, mode, range, variance, and standard deviation. It provides examples and formulas for computing each statistic using data on employee productivity. The key points are:
- Descriptive statistics are used to summarize and describe data through measures of central tendency (mean, median, mode) and variability (range, variance, standard deviation).
- The appropriate statistic to use depends on the level of measurement of the data (nominal, ordinal, interval, ratio).
- Examples are provided to demonstrate how to calculate and interpret the mean, median, mode, range, variance, and standard deviation using data on the number of items employees produced.
Statistics is the study of collecting, analyzing, and presenting quantitative data. It involves planning data collection through surveys and experiments, as well as analyzing the data using measures of central tendency like the mean, median, and mode. The mean is the average value found by summing all values and dividing by the total number of values. The median is the middle value when data is arranged in order. The mode is the most frequent value. Statistics has limitations as it does not study qualitative data or individuals, and statistical laws may not be universally applicable. Frequency distributions organize data values and their frequencies to understand patterns in the data.
Data confusion (how to confuse yourself and others with data analysis), by Vijay Kukrety
The document discusses various ways that data can be misused or misinterpreted, including through the use of misleading or non-informative graphs, misapplying averages, and using inappropriate statistical methods. It provides examples of bad graphs and analyses to avoid, and emphasizes the importance of properly collecting and presenting data to draw accurate conclusions. Key topics covered include distinguishing descriptive, enumerative, and analytical studies; understanding outliers and regression analysis; and avoiding forcing linear models on nonlinear data relationships.
Here are the key points about hyperlactatemia in pediatric patients:
- Hyperlactatemia occurs when there is an imbalance between tissue oxygen supply and demand, leading to increased anaerobic glycolysis and lactate production.
- It is commonly seen in pediatric ICU patients, especially following surgery, trauma, or septic shock which cause multiple organ dysfunction.
- Higher lactate levels are associated with worse clinical outcomes and prognosis in critically ill children.
- The PRISM III score, which evaluates the risk of mortality in pediatric ICU patients, was calculated for the patients in this study.
- Treatment aims to support organ function, optimize tissue oxygen delivery, and address any underlying causes contributing to the hyper
A General Manger of Harley-Davidson has to decide on the size of a.docx, by evonnehoggarth79783
A General Manager of Harley-Davidson has to decide on the size of a new facility. The GM has narrowed the choices to two: a large facility or a small facility. The company has collected information on the payoffs, and it now has to decide which option is best using probability analysis, the decision tree model, and expected monetary value.
Options:

Facility | Demand Options | Probability | Actions       | Expected Payoffs
Large    | Low Demand     | 0.4         | Do Nothing    | ($10)
Large    | Low Demand     | 0.4         | Reduce Prices | $50
Large    | High Demand    | 0.6         |               | $70
Small    | Low Demand     | 0.4         |               | $40
Small    | High Demand    | 0.6         | Do Nothing    | $40
Small    | High Demand    | 0.6         | Overtime      | $50
Small    | High Demand    | 0.6         | Expand        | $55

Determination of chance probability and respective payoffs:
Build Small:
Low Demand: 0.4 × $40 = $16
High Demand: 0.6 × $55 = $33
Build Large:
Low Demand: 0.4 × $50 = $20
High Demand: 0.6 × $70 = $42

Determination of the expected value of each alternative:
Build Small: $16 + $33 = $49
Build Large: $20 + $42 = $62
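The same expected-monetary-value comparison can be scripted. Below is a minimal Python sketch using only the probabilities and payoffs from the calculation above; the dictionary layout and names are illustrative, not part of the original assignment.

```python
# Expected monetary value (EMV) per alternative: each branch is a
# (probability, payoff in $) pair, using the best action per demand level.
options = {
    "Build Small": [(0.4, 40), (0.6, 55)],
    "Build Large": [(0.4, 50), (0.6, 70)],
}

for name, branches in options.items():
    emv = sum(p * payoff for p, payoff in branches)
    print(f"{name}: EMV = ${emv:.0f}")

# Build Small: EMV = $49
# Build Large: EMV = $62  -> the large facility has the higher expected value
```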
Click here for the Statistical Terms review sheet.
Submit your conclusion in a Word document to the M4: Assignment 2 Dropbox by Wednesday, November 18, 2015.
SAMPLING MEAN:
DEFINITION:
The term sampling mean is a statistical term used to describe the properties of statistical distributions. In statistical terms, the sample mean from a group of observations is an estimate of the population mean. Given a sample of size n, consider n independent random variables X1, X2, ..., Xn, each corresponding to one randomly selected observation. Each of these variables has the distribution of the population, with mean μ and standard deviation σ. The sample mean is defined to be x̄ = (X1 + X2 + ... + Xn) / n.
WHAT IT IS USED FOR:
Besides estimating the population mean, it is used to measure the central tendency of the numbers in a data set.
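As a quick illustration of the sample mean as an estimator, here is a small Python sketch; the population parameters and the seed are invented for the example. As the sample size n grows, the sample mean settles near the population mean.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
mu, sigma = 50.0, 10.0  # illustrative population mean and standard deviation

for n in (10, 100, 10_000):
    sample = rng.normal(mu, sigma, size=n)
    print(f"n = {n:>6}: sample mean = {sample.mean():.3f}")
# The printed sample means approach mu = 50 as n increases.
```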
Data analysis involves cleaning, transforming, and modeling data to extract useful information for making business decisions. It involves examining past data to understand what happened previously, or what could happen under different decisions, in order to make informed choices. There are various tools that can help users process, manipulate, and analyze relationships in data to identify patterns and trends. Major techniques of data analysis include text analysis, statistical analysis, diagnostic analysis, predictive analysis, and prescriptive analysis. Statistical modeling applies statistical analysis to data to understand relationships between variables, make predictions, and visualize data for stakeholders. Learning statistical modeling helps in choosing the right model, preparing data for analysis, and communicating findings to different audiences.
This document discusses collecting and analyzing data for evaluation purposes. It defines data collection as gathering information through various means and organizing it so it can be easily worked with. Analyzing data involves examining collected information to reveal relationships, patterns, and trends. Both quantitative and qualitative data should be collected from the start of a program through completion and afterwards to evaluate effectiveness. Statistical analysis of quantitative data can show if changes were significant, while qualitative data provides insight into participants' experiences. Collecting and analyzing both types of high-quality data produces the best overall evaluation.
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data Lake, by Walaa Eldin Moustafa
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You..., by Aggregage
This webinar will explore cutting-edge, less familiar but powerful experimentation methodologies which address well-known limitations of standard A/B Testing. Designed for data and product leaders, this session aims to inspire the embrace of innovative approaches and provide insights into the frontiers of experimentation!
Orchestrating the Future: Navigating Today's Data Workflow Challenges with Ai..., by Kaxil Naik
Navigating today's data landscape isn't just about managing workflows; it's about strategically propelling your business forward. Apache Airflow has stood out as the benchmark in this arena, driving data orchestration forward since its early days. As we dive into the complexities of our current data-rich environment, where the sheer volume of information and its timely, accurate processing are crucial for AI and ML applications, the role of Airflow has never been more critical.
In my journey as the Senior Engineering Director and a pivotal member of Apache Airflow's Project Management Committee (PMC), I've witnessed Airflow transform data handling, making agility and insight the norm in an ever-evolving digital space. At Astronomer, our collaboration with leading AI & ML teams worldwide has not only tested but also proven Airflow's mettle in delivering data reliably and efficiently—data that now powers not just insights but core business functions.
This session is a deep dive into the essence of Airflow's success. We'll trace its evolution from a budding project to the backbone of data orchestration it is today, constantly adapting to meet the next wave of data challenges, including those brought on by Generative AI. It's this forward-thinking adaptability that keeps Airflow at the forefront of innovation, ready for whatever comes next.
The ever-growing demands of AI and ML applications have ushered in an era where sophisticated data management isn't a luxury—it's a necessity. Airflow's innate flexibility and scalability are what makes it indispensable in managing the intricate workflows of today, especially those involving Large Language Models (LLMs).
This talk isn't just a rundown of Airflow's features; it's about harnessing these capabilities to turn your data workflows into a strategic asset. Together, we'll explore how Airflow remains at the cutting edge of data orchestration, ensuring your organization is not just keeping pace but setting the pace in a data-driven future.
Session in https://budapestdata.hu/2024/04/kaxil-naik-astronomer-io/ | https://dataml24.sessionize.com/session/667627
Predictably Improve Your B2B Tech Company's Performance by Leveraging Data, by Kiwi Creative
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
Open Source Contributions to Postgres: The Basics POSETTE 2024, by ElizabethGarrettChri
Postgres is the most advanced open-source database in the world and it's supported by a community, not a single company. So how does this work? How does code actually get into Postgres? I recently had a patch submitted and committed and I want to share what I learned in that process. I’ll give you an overview of Postgres versions and how the underlying project codebase functions. I’ll also show you the process for submitting a patch and getting that tested and committed.
1. Data Analysis Methods (Kaedah Menganalisis Data)
Prof. Dr. Md. Nasir Ibrahim
Post-doctoral, Sheffield Hallam University, UK
PhD, University of Tasmania, Australia
MA, McGill University, Canada
B.A. (Hons.), USM, Penang
4. Data Analysis is…
Definition
"A process used by researchers for reducing data to a story and interpreting it to derive insights. The data analysis process helps in reducing a large chunk of data into smaller fragments, which make sense."
- LeCompte & Schensul (2013)
5. Widely used in the natural and social sciences: biology, chemistry, psychology, economics, sociology, marketing, etc.
PURPOSE
Find patterns and averages, make predictions, test causal relationships, and generalize results.
6. The Purpose
Data analysis helps make sense of our data; otherwise, it remains a pile of unwieldy information, perhaps just a pile of figures. It answers the research questions and helps determine the trends and relationships among the variables.
7. Characteristics of the Quantitative and Qualitative Research Process
Shared research steps: identifying a problem; reviewing the literature; specifying a purpose; collecting data; analysing and interpreting data; reporting and evaluating.
Quantitative vs. qualitative characteristics:
- Descriptive/explanatory vs. exploratory/understanding a phenomenon
- Major role: justify the problem vs. major role: explore the problem
- Specific and narrow vs. general and broad
- Measurable/observable vs. participants' experience
- Pre-determined instruments vs. general, emerging from text or image data
- Numeric data, large numbers vs. small numbers
- Statistical description of trends, comparisons/predictions vs. text analysis, description and themes, larger meanings of findings
- Standard and fixed vs. flexible and emerging
- Objective and unbiased vs. reflexive and biased
9. Two Types of Analysis
Descriptive:
- Collecting, summarising, and describing data.
- Numerical values obtained from the sample give meaning to the data collected.
- Uses percentages, the mean, median, mode, range, and standard deviation.
Inferential:
- Drawing conclusions that go beyond the immediate data, based only on sample data.
- E.g. the t-test (difference between the means of two independent groups) and ANOVA (to test the significance of differences between the means of two or more groups).
10. Descriptive: An Example of a Real Case
You own a restaurant and want to know how your business is being perceived by people. So, you give one of your staff, Amy, the responsibility of carrying out a small survey of your customers to see how they feel. Amy goes head on to collect information from every client who visits your restaurant. At the end of the week, Amy comes to you with a giant grin on her face and hands you a long list of numbers recording how many points each client has given your business. You stare down at this soup of numbers and scratch your head, as you have no idea what to make of it! You're probably considering firing Amy at the moment, right?
11. Well, Amy's efforts weren't exactly in vain, but instead of just handing you a bunch of numbers, she should have represented them in a more meaningful way so that you could make a proper inference from them. This is where statistical analysis comes into play.
12. Amy got hold of about 100 customer feedback forms. Say your customers filled out a survey form where they scored your restaurant on a scale of 1 to 10. She could have organized these scores into a chart or graph to let you pictorially deduce how many of your customers think your food rocks. Assuming 75 of them gave you more than 5 points while 25 of them gave you less than 5 points, it would have served you better if Amy had created a pie chart clearly indicating what percentage of your clients love you. In that case, you could easily see that 75% of your customers like you, so you're right on track!
14. Amy could also have calculated an average of all the scores to give you an idea of the general client sentiment. So, if most of the clients scored you between 6 and 7, your average would come to around 6.5, showing that it's not all bad, but you could do better. There are many other ways to visualize data or draw an inference from a limited set of data. These include other measures of central tendency, like the mean, median, and mode, as well as measures of dispersion, like the variance and standard deviation. These values can give you an idea of how varied the opinions of your customers are. Altogether, these measures and visualizations can give you a pretty good idea of how your business is doing.
16. How to Calculate the Mean?
Scores from 17 customers: 5, 7, 6, 7, 7, 7, 7, 6, 6, 7, 8, 6, 5, 7, 7, 6, 6. Total = 110.
Mean: x̄ = (total score) / (number of respondents) = 110 / 17
17. How to Calculate the Mean? (continued)
Mean = 110 / 17 = 6.470588
Statistical analysis of the same 17 scores:
- Mean: 6.470588
- Median: 7
- Mode: 7
- Variance: 0.639706
- Standard deviation: 0.799816
18. The mean may not be a fair representation of the data, because the average is easily influenced by outliers (very small or large values in the data set that are not typical).
19. Median
The median is the point at which there are an equal number of data points whose values lie above and below it: truly the middle of the data set. It is the central value of the variable, dividing the series into two equal parts in such a way that half of the items lie above this value and the remaining half lie below it. The next time you hear an average reported, look to see whether the median is also reported. If not, ask for it!
21. How to Calculate the Median?
Sort the 17 scores from smallest to largest: 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 8.
Position of the median = (n + 1) / 2 = (17 + 1) / 2 = 9
The 9th value in the sorted list is 7, so the median is 7.
22. How to Calculate the Mode?
1. Write the numbers in your data set.
2. Order the numbers from smallest to largest.
3. Count the number of times each number is repeated.
4. Identify the value (or values) that occur most often. In our example set ({5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 8}), because 7 occurs more times than any other value, 7 is the mode.
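All five statistics reported on slides 16-22 can be reproduced with Python's standard statistics module, which, like the slides, uses the n - 1 sample variance; the scores below are the 17 customer ratings from the tables above.

```python
import statistics as st

scores = [5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 8]  # sum = 110, n = 17

print("Mean:              ", st.mean(scores))      # 6.4705882...
print("Median:            ", st.median(scores))    # 7
print("Mode:              ", st.mode(scores))      # 7
print("Variance (n-1):    ", st.variance(scores))  # 0.6397058...
print("Standard deviation:", st.stdev(scores))     # 0.7998161...
```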
24. "…to discover and describe issues in the field or structures and processes in routines and practices. Often, qualitative data analysis combines approaches of a rough analysis of the material (overviews, condensation, summaries) with approaches of a detailed analysis (elaboration of categories, hermeneutic interpretations or identified structures). The final aim is often to arrive at generalizable statements by comparing various materials or various texts or several cases."
Flick, U. (2014). The SAGE handbook of qualitative data analysis. SAGE. p. 5
25. The analysis of qualitative data can have several aims:
- Describing the subjective experiences of a specific individual or group, comparing and contrasting the case(s) (individual or group), their special features, and the links between them.
- Explaining differences between cases (e.g. circumstances which make it more likely that students in a specific situation learn drawing more successfully than other students).
- Developing a theory of the phenomenon under study from the analysis of empirical material (e.g. a theory of art making).
26. Three essential things take place:
- Data organization: a method of classifying and organising data sets to make them more useful.
- Data reduction: summarisation and categorisation, which help in finding patterns and themes in the data for easy identification and linking.
- Data interpretation: conducted in either top-down or bottom-up fashion to make meaning.
27. A Step-by-Step Guide
QDA is the range of processes and procedures whereby the data that have been collected are explained based on the understanding or interpretation of the people and situations we are investigating. QDA is usually based on an interpretative philosophy; the idea is to examine the meaningful and symbolic content of qualitative data.
1. Organisation of the collected data: transcribe the interviews, translate the data, record the details, label the contents.
2. Identification of a framework: the framework is the coding plan used to structure, label, and define the data.
3. Sorting the data: the data are sorted based on category and theme.
4. Descriptive analysis: describe the data based on the research questions.
5. Second-order analysis: identify and consolidate the recurrent themes and patterns present in the data.
28. Five Types of Qualitative Analysis
- Content analysis – descriptive (what is in the data?) and interpretative (what was meant by the data?).
- Narrative analysis – a cluster of analytic methods for deciphering texts or visual data that have a storied form.
- Discourse/conversation analysis – involves real text, not invented, created, or artificial text.
- Framework analysis – five distinct phases: familiarization, identifying a thematic framework, coding, charting and mapping, and interpretation.
- Grounded theory – analysis and development of theories happen once the information has been collected.
29. What’s Content Analysis
(CA)
Content analysis is the study of documents
and artifacts, which might be texts of various
formats, paintings, pictures, audio or video.
Social scientists use content analysis to
examine patterns in communication in a
replicable and systematic manner.
29
Publisher: Routledge
Year: 1999Sage Publications, 2004
Dr. Klaus H. Krippendorff
30. Content analysis can be both quantitative (focused on counting and measuring) and qualitative (focused on interpreting and understanding). In both types, you categorize or "code" words, themes, and concepts within the texts and then analyze the results.
31. How to Conduct Content Analysis
5 steps:
1. Select the content you will analyze: choose the texts that you will analyze. If only a small number of texts meet your criteria, you might analyze all of them; if there is a large volume of texts, you can select a sample.
2. Define the units and categories of analysis: the unit(s) of meaning that will be coded and the set of categories that you will use for coding. Categories can be objective characteristics (e.g. female, aged 40-50, lawyer, mother) or more conceptual (e.g. trustworthy, corrupt, conservative, family oriented).
3. Develop a set of rules for coding: organize the units of meaning into the previously defined categories. It's important to clearly define the rules for what will and won't be included, to ensure that all texts are coded consistently.
4. Code the text according to the rules: go through each text and record all relevant data in the appropriate categories. This can be done manually or aided with computer programs, such as QSR NVivo, Atlas.ti, and Diction, which can help speed up the process of counting and categorizing words and phrases.
5. Analyze the results and draw conclusions: find patterns, draw conclusions in response to your research question, and make inferences.
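To make step 4 concrete, here is a toy Python sketch of rule-based coding by keyword matching. The categories, keywords, and example texts are invented for illustration; in a real study, the rules would come from the codebook defined in steps 2 and 3.

```python
from collections import Counter
import re

# Hypothetical coding rules: category -> keywords that trigger it.
coding_rules = {
    "trust":  {"trust", "honest", "reliable"},
    "family": {"family", "mother", "father", "children"},
}

texts = [
    "A reliable and honest leader who puts family first.",
    "Trust takes years to build.",
]

counts = Counter()
for text in texts:
    words = set(re.findall(r"[a-z]+", text.lower()))
    for category, keywords in coding_rules.items():
        if words & keywords:
            counts[category] += 1  # count each text at most once per category

print(counts)  # Counter({'trust': 2, 'family': 1})
```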
32. Coding Qualitative Data: How to Code Qualitative Research
What is coding in qualitative research?
1. Coding is the process of labeling and organizing your qualitative data to identify different themes and the relationships between them.
2. When coding, you assign labels to words or phrases that represent important (and recurring) themes in each response.
3. These labels can be words, phrases, or numbers; we recommend using words or short phrases, since they're easier to remember, skim, and organize.
4. Thematic analysis is done to find common themes and concepts; it extracts themes from text by analyzing the word and sentence structure.
33. Types of Coding
There are two types of coding:
- Deductive coding: also called concept-driven coding. You start with a predefined set of codes, which might come from previous research, or you might already know what themes you're interested in analyzing.
- Inductive coding: also called open coding. You start from scratch and create codes based on the qualitative data itself; there is no set codebook, and all codes arise directly from the responses.
34. What Is Deductive Coding?
Deductive coding is a top-down approach where you start by developing a codebook with your initial set of codes (pre-set coding schemes). This set could be based on your research questions or an existing research framework or theory. Researchers set up the codes based on emerging themes and define them according to the source (e.g. literature review, support, etc.). Once the coding scheme is established, the researcher applies the codes to the text.
35. What Is Inductive Coding?
Inductive coding involves the conversion of raw qualitative data into more useful quantitative data. Unlike deductive analysis, inductive research does not involve the testing of pre-conceived hypotheses, instead allowing the theory to emerge from the content of the raw data.
36. What's Narrative Inquiry?
- A relatively new qualitative methodology: the study of experience understood narratively.
- Narrative inquirers think narratively about experience throughout the inquiry.
- Uses a recursive, reflexive process of moving from the field (with starting points in the telling or living of stories) to field texts (data) to interim and final research texts.
- The commonplaces of temporality, sociality, and place create a conceptual framework within which different kinds of field texts and different analyses can be used.
- Highlights ethical matters as well as shaping new theoretical understandings of people's experiences.
(Reference shown: D. Jean Clandinin, Handbook of Narrative Inquiry: Mapping a Methodology. Sage, 2007.)
37. What's Narrative Analysis (NA)?
The social constructionist perspective is that all 'narratives sit at the intersection of history, biography, and society' (Liamputtong and Ezzy 2005: 132); they are dependent on the context of the teller and the listener, and are not intended to represent 'truth'.
38. How to Analyse?
- Texts are analysed within their social, cultural, and historical context.
- They are deconstructed to search for themes and subthemes, in order to build up a theory grounded in the data.
- Different researchers have their own style of narrative analysis. The Foucauldian style examines multiple voices to draw out which voices were silenced and which were powerful: 'The interpretation we call truth is the one that is attached to power' (Byrne-Armstrong 2001: 113).
39. Narrative Analysis Step-by-Step
01 Biographical details: look for explanatory factors such as age, gender, education, experience, and so on.
02 Summarising the story: summarise each participant's story without losing the meaning, capturing the significant ideas or issues.
03 Coding: develop categories, themes, and sub-themes, using participants' own language to describe each theme.
04 Creating metaphor: highlight 'quotable quotes', pulling out one phrase to represent each participant.
05 Life history (Flick, von Kardorff and Steinke 2004): reducing and re-ordering narratives and writing up are interwoven processes.
40. What’s Discourse Analysis
(DA)
A research method for studying written or spoken language
in relation to its social context. It aims to understand how
language is used in real life situations.
Discourse analysis is a common qualitative research
method in many humanities and social science disciplines,
including linguistics, sociology, anthropology, psychology
and cultural studies.
40
Publisher: Routledge
Year: 1999
41. You make interpretations based on both the details of the material itself and on contextual knowledge.
Step 1: Define the research question and select the content of analysis. Begin with a clearly defined research question; then select a range of material that is appropriate to answer it.
Step 2: Gather information and theory on the context. Establish the social and historical context in which the material was produced and intended to be received. To understand the real-life context of the discourse, you can also conduct a literature review on the topic and construct a theoretical framework to guide your analysis.
Step 3: Analyze the content for themes and patterns. Closely examine various elements of the material, such as words, sentences, paragraphs, and overall structure, and relate them to attributes, themes, and patterns relevant to your research question.
Step 4: Review your results and draw conclusions. Once you have assigned particular attributes to elements of the material, reflect on your results to examine the function and meaning of the language used. Consider your analysis in relation to the broader context that you established earlier to draw conclusions that answer your research question.
43. What’s Grounded Theory
Analysis (GTA)
A type of scientific research concerned with the
emerging concepts of social phenomena.
It refers to situations where data collection is
conducted in an unstructured way (Joubish,
Khurram, Ahmed, Fatima, & Haider, 2011)
43
Publisher: Routledge
Year: 1999
44. Three Stages (Strauss & Corbin)
- Open coding: take your textual data and break it up into discrete parts; mark in vivo and researcher-denoted codes; the breaking down of core themes.
- Axial coding: the process of relating codes (categories and concepts) to each other, via a combination of inductive and deductive thinking.
- Selective coding: selecting one central category that connects all the codes from your analysis and captures the essence of your research.
45. How to Do Open Coding?
1. Read the interview transcript line by line and mark interesting statements.
2. Summarise statements which belong together into one category.
3. Develop the characteristics and dimensions of each category out of the data.
4. Build a codebook with all categories and subcategories, together with their characteristics and dimensions.
5. Read the other transcripts line by line, reorganising subcategories and summarising categories, writing memos throughout.
49. What's Axial Coding?
Axial coding is a qualitative research technique that involves relating data together in order to reveal codes, categories, and subcategories grounded within participants' voices in the collected data; in other words, it is one way to construct linkages between data. Axial coding is the breaking down of core themes during qualitative data analysis; in grounded theory, it is the process of relating codes (categories and concepts) to each other, via a combination of inductive and deductive thinking.
50. Example of Axial Coding
- Describe the relevant excerpt selected.
- Create a code that reflects the description of the excerpt (with the research question in mind).
51. Selective Coding
Selective coding is the process of choosing one category to be the core category and relating all other categories to that category. The essential idea is to develop a single storyline around which everything else is draped; there is a belief that such a core concept always exists. It connects data to discover patterns around the core category, is more abstract, and is the more difficult part.
52. [Mind-map figure on teaching practice: planning, methods, evaluation, ground rules, dialogue, class management, learning contracts, how to teach, questioning, work-based practice, lectures, forms of assessment, marking, whiteboard, overhead projectors, screen-based media, handouts, humour, yourself. Source: http://chks.wested.org/using_results/resilience]
55. A visual analysis is used to communicate how the aesthetic or formal qualities of an image relate to seemingly relevant ideas, histories, narratives, politics, cultures, affects, and/or experiences. In other words, visual analyses are used to show how particular visuals create specific effects and/or affects.
56. Visual analyses often involve a combination of writing styles. This can include observant, technical, emotive, critical, reflective, and/or speculative modes of writing about images. Visual analyses can vary in length from a few sentences, to a paragraph, to an entire essay.
57. To create a coherent and clear interpretation of an image, it is recommended that you do three things: describe, analyse, and interpret.
58. How Should You Structure a Visual Analysis of an Art Work?
1. Describe: using descriptive and visual language, tell the reader what the image looks like, covering the subject matter and how it has been composed. Detail the formal and structural qualities of the image (such as its tonal, linear, and textural characteristics). You may also wish to describe how the image has been made and exhibited.
2. Analyse: consider the affective and experiential qualities of the work, and relate step one to the image's context, its seemingly associated concepts, and a specific theoretical framework.
3. Interpret: provide the reader with a concluding remark (taking the image as its focus) that clearly articulates the overall impact, or perhaps 'meaning', of your selected image. Sometimes you may arrive at multiple, even conflicting, interpretations of a work.
60. Faculty of Art, Computing and Creative Industry,
Universiti Pendidikan Sultan Idris,
35900 Tanjong Malim, Perak,
MALAYSIA
bistarian@mail.com +6011 3350 1941