Day 08 Activity
Fisher & Hughes
September 21, 2018

Study
A study was conducted to determine the effects of alcohol on human reaction times. Fifty-seven adults in two age groups were recruited for this study and randomly allocated to one of three alcohol treatment groups: a control group, whose subjects remained sober during the entire study; a moderate group, whose subjects were supplied alcohol but limited so that their blood alcohol content (BAC) remained under the legal limit to drive (BAC of 0.08); and a high group, whose subjects received enough alcohol that their BAC could exceed the legal limit for driving. Each subject was trained on a video game system, and their reaction time (in milliseconds) to a visual stimulus was recorded at 7 time points 30 minutes apart (labeled T0=0, T1=30, T2=60, and so on). At time point T0 all subjects were sober; those in the alcohol consumption groups began drinking after the first measured reaction time (controlled within the specifications outlined). The researcher is interested in the influence that alcohol and age (namely, does reaction time differ between those in their 20s and those in their 30s) have on reaction times.
The task for today is to do a complete analysis for this study and dig into the effects that alcohol, age, and time have on reaction times.

Data input and wrangling
First read in the data:

alcohol <- read.csv("alcoholReaction.csv")
head(alcohol)

## Subject Age Alcohol T0 T1 T2 T3 T4 T5 T6
## 1 1 24 Control 255.3 254.8 256.4 255.1 257.0 256.1 257.0
## 2 2 34 Control 250.1 249.2 249.0 248.0 248.0 248.9 248.1
## 3 3 31 Control 248.2 247.1 246.9 246.7 246.0 246.0 247.0
## 4 4 24 Control 253.9 253.8 254.9 254.1 253.2 254.1 255.0
## 5 5 38 Control 250.0 251.0 250.0 249.9 248.8 249.1 249.9
## 6 6 38 Control 246.0 248.0 247.0 248.1 248.1 246.9 244.0
Note that the Age variable is recorded as an actual age in years, not the 20s/30s category we want, so we need to dichotomize this variable. Also note that the data are in wide format: the reaction times (the response variable) are spread over multiple columns, and we need a way to gather these columns into a single column. So we need to do some data processing.
First consider the code below:

head(alcohol %>%
       mutate(Age = case_when(Age < 31 ~ "20s",
                              Age %in% 31:40 ~ "30s")))

## Subject Age Alcohol T0 T1 T2 T3 T4 T5 T6
## 1 1 20s Control 255.3 254.8 256.4 255.1 257.0 256.1 257.0
## 2 2 30s Control 250.1 249.2 249.0 248.0 248.0 248.9 248.1
## 3 3 30s Control 248.2 247.1 246.9 246.7 246.0 246.0 247.0
## 4 4 20s Control 253.9 253.8 254.9 254.1 253.2 254.1 255.0
## 5 5 30s Control 250.0 251.0 250.0 249.9 248.8 249.1 249.9
## 6 6 30s Control 246.0 248.0 247.0 248.1 248.1 246.9 244.0
case_when is essentially a piecewise comparison: when Age is less than 31, the Age variable is overwritten with "20s"; when Age is in 31 to 40, it is overwritten with "30s".
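Putting the pieces together, here is a sketch of the full wrangling step (assuming the dplyr and tidyr packages are available; the column names Time and Reaction are my own choices, as is using pivot_longer to do the gathering). Two rows from the printout above are recreated inline so the sketch is self-contained:

```r
library(dplyr)
library(tidyr)

# Recreate the first two rows shown above so the sketch runs on its own
alcohol <- data.frame(Subject = 1:2, Age = c(24, 34), Alcohol = "Control",
                      T0 = c(255.3, 250.1), T1 = c(254.8, 249.2),
                      T2 = c(256.4, 249.0), T3 = c(255.1, 248.0),
                      T4 = c(257.0, 248.0), T5 = c(256.1, 248.9),
                      T6 = c(257.0, 248.1))

alcohol.long <- alcohol %>%
  # Dichotomize Age into the 20s/30s categories
  mutate(Age = case_when(Age < 31 ~ "20s",
                         Age %in% 31:40 ~ "30s")) %>%
  # Gather the T0..T6 columns into one response column
  pivot_longer(cols = T0:T6, names_to = "Time", values_to = "Reaction") %>%
  # Convert the labels T0, T1, ... into elapsed minutes 0, 30, ...
  mutate(Time = 30 * as.numeric(sub("T", "", Time)))

head(alcohol.long)
```

Each subject now contributes 7 rows, one per time point, which is the long format that plotting and modeling functions expect.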
COMMUNITY CORRECTIONS
Prepared By:
Date:
PROBATION
Description:
Purpose(s) served:
Advantages:
1.
2.
3.
Drawbacks:
1.
2.
3.
INTERMEDIATE SANCTIONS
Name of punishment: COMMUNITY SERVICE
Description:
Purpose(s) served:
Advantages:
1.
2.
3.
Drawbacks:
1.
2.
3.
Name of punishment: RESTITUTION
Description:
Purpose(s) served:
Advantages:
1.
2.
3.
Drawbacks:
1.
2.
3.
Name of punishment: HOUSE ARREST
Description:
Purpose(s) served:
Advantages:
1.
2.
3.
Drawbacks:
1.
2.
3.
REFERENCES
Chapter 16: Inference for Regression
Climate Change
The earth has been getting warmer. Most climate scientists agree that one important cause of the warming is the increase in atmospheric levels of carbon dioxide (CO2), a greenhouse gas. Here is part of a regression analysis of the mean annual CO2 concentration (CO2) in the atmosphere, measured in parts per million (ppm), at the top of Mauna Loa in Hawaii, and the mean annual air temperature (Temp) over both land and sea across the globe, in degrees Celsius.
Let’s first read the dataset into R
climate <- read.table('Climate_Change.txt', sep = '\t', header = TRUE)
and take a look at the data structure:
str(climate)
## 'data.frame': 29 obs. of 3 variables:
## $ year: int 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 ...
## $ Temp: num 14.2 14.3 14.1 14.3 14.1 ...
## $ CO2 : num 339 340 341 342 344 ...
We see three variables, which are year, Temp (mean annual air temperature) and CO2 (mean annual CO2
concentration), and there are 29 observations in each variable.
We now take Temp as the response variable and CO2 as the predictor variable to study their relationship. To see if linear regression is appropriate, we make a scatterplot of Temp against CO2:
plot(climate$CO2, climate$Temp, xlab = 'CO2 Concentration', ylab = 'Temperature')
[Scatterplot of Temperature (y-axis, roughly 14.1 to 14.5 degrees Celsius) against CO2 Concentration (x-axis, roughly 340 to 380)]
It seems reasonable to fit a linear model to the dataset, because both variables are quantitative, the data
points show a linear pattern, and there is no outlier. So, let’s fit the model:
imod <- lm(Temp ~ CO2, data = climate)
The summary of the fitted model is given by
summary(imod)
##
## Call:
## lm(formula = Temp ~ CO2, data = climate)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.16809 -0.07972 0.00194 0.07013 0.18532
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.707076 0.481006 22.260 < 2e-16 ***
## CO2 0.010062 0.001336 7.534 4.19e-08 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.09847 on 27 degrees of freedom
## Multiple R-squared: 0.6776, Adjusted R-squared: 0.6657
## F-statistic: 56.76 on 1 and 27 DF, p-value: 4.192e-08
which contains a lot of information. We see that R2 = 0.6776 and the SD of the residuals is se = 0.09847 (the estimate of the population standard deviation σ) with 27 degrees of freedom. In the Coefficients section we see the intercept b0 = 10.71 and the slope b1 = 0.01. Their standard errors are SE(b0) = 0.481 and SE(b1) = 0.00134, and their t-test statistics are t0 = b0/SE(b0) = 22.26 and t1 = b1/SE(b1) = 7.534. The corresponding (two-tailed) p-values are very small (< 2e-16 and 4.19e-08). As a result, we reject H0 : β1 = 0 and conclude there is a positive correlation between Temp and CO2. The slope b1 = 0.01 can be interpreted as follows: the air temperature will increase by 0.01 degrees Celsius on average if the CO2 concentration in the atmosphere increases by 1 p ...
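As a follow-up inference step (my own addition, not part of the original handout), we can build a 95% confidence interval for the slope by hand from the numbers printed in the summary: b1 ± t* × SE(b1), where t* is the 0.975 quantile of a t distribution on 27 degrees of freedom:

```r
b1  <- 0.010062   # slope estimate from summary(imod)
se1 <- 0.001336   # standard error of the slope
df  <- 27         # residual degrees of freedom

tstar <- qt(0.975, df)              # critical value, about 2.052
ci <- b1 + c(-1, 1) * tstar * se1   # lower and upper limits
round(ci, 5)                        # approximately (0.00732, 0.01280)
```

The same interval comes directly from confint(imod, "CO2") once the model is fit; since the interval excludes 0, it agrees with the tiny p-value for the slope test.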
Dimensional analysis can be used to derive equations, check if equations are dimensionally correct, and find the dimensions or units of derived quantities. It involves identifying the fundamental dimensions - such as length, time, mass - of the variables in an equation. An equation is dimensionally correct if the dimensions on both sides are equal. For example, the equation for velocity, v=s/t, can be dimensionally checked as [v]=[s]/[t] which gives meters/second. Dimensional analysis allows deriving the formula for the period of a pendulum as T=2π√(l/g).
This document provides an overview of time series analysis. It defines a time series as numerical data obtained at regular time intervals that occurs in many domains like economics, finance, and environment. Time series data are different from other data as they are not independent and have large sample sizes. The key components of a time series are the trend, seasonal variation, cyclical variation, and irregular/random variation. Decomposition methods are used to separate out these components. Smoothing techniques like moving averages are employed to better understand the overall patterns in time series data. Seasonal indices are calculated to measure the degree to which different seasons vary from each other.
This document discusses chemical kinetics, which is the branch of chemistry that deals with the rates of chemical reactions. It explains that chemical kinetics tells us about the speed or rate of reactions, whereas thermodynamics only tells us about the feasibility of reactions. The key points are:
- Chemical kinetics studies how the rates of chemical reactions change with factors like concentration, temperature, and catalysts. It helps answer questions about how quickly processes like food spoilage or fuel combustion occur.
- The rate of a reaction can be defined as the change in concentration of a reactant or product over time. Average rate is calculated over a time interval, while instantaneous rate is the rate at a single moment in time.
-
This document discusses reaction rates and kinetics concepts including:
- Instantaneous reaction rates can be calculated from the slope of concentration-time graphs at specific points.
- Reaction orders and rate laws can be determined experimentally using methods like the initial rate method or integrated rate law method.
- First-order reactions follow the integrated rate law that the natural log of the concentration is linear with time. Second-order and zero-order reactions also have defining rate laws and kinetics equations.
Chemistry (Module 1) introduces several key concepts:
[1] It discusses units and dimensions, and defines the seven SI base units - meter, kilogram, second, kelvin, ampere, candela, and mole.
[2] It explains prefixes that are used to modify the SI units and increase or decrease their magnitude, such as milli, centi, kilo, mega.
[3] It describes derived units which are derived by combining the basic units through multiplication or division, such as m3 for volume, m2 for area, and J for energy.
[4] It discusses the classification of matter as elements, compounds, and mixtures based on their chemical
DB2: Case Analysis
Review the case study at the end of Chapter 8, Frederick W. Smith - FedEx. Answer the five questions below:
1. How do the standards set by Fred Smith for FedEx teams improve organizational performance?
2. What motivates the members of FedEx to remain highly engaged in their teams?
3. Describe the role FedEx managers play in facilitating team effectiveness.
4. What types of teams does FedEx use? Provide evidence from the case to support your answer.
5. Leaders play a critical role in building effective teams. Cite evidence from the case that FedEx managers performed some of these roles in developing effective teams.
Image Source Team:
http://www.freedigitalphotos.net/images/gallery-thumbnails.php?id=50143103253525199427035558
.
DB Response 1I agree with the decision to search the house. Ther.docxedwardmarivel
DB Response 1
I agree with the decision to search the house. There was reasonable suspicion to believe the fugitive could have been in the home. The homeowner not only consented to the search of the house but requested it for her safety. Complacency kills. In this situation, the officer is very regretful in his decision to conduct a complacent search of the home, and luckily nobody was killed.
My department does not have body cameras, but I still conduct business as if somebody is recording me. We live in a generation of surveillance. You never know when there are hidden cameras, a camera on a business you did not notice, or a cell phone recording from the top floor of a building. We hire police officers with high amounts of integrity because the definition of integrity is doing the right thing even when nobody is looking. I would be lying if I said my grandmother would approve of everything I do on the job. I am most guilty of foul language and it is something that I am working on not doing that. However, I can emphatically say I work with integrity and honesty without a doubt.
I think setting limits on tolerable behavior in regards to sexual and general harassment is appropriate; however, there are too many situations to make a policy for every behavior one could find inappropriate. When it comes to using force again every situation is different but there should be a pretty well laid out policy at departments for when and how an officer should use a certain amount of force. Officers should be trained on de-escalation tactics and alternatives to using force. Tactical training should include strategies to create time, space, and distance, to reduce the likelihood that force will be necessary and should occur in realistic conditions appropriate to the department’s location (U.S. Commission On Civil Rights, 2018).
Philippians 2 verses 3 – 8 is a pretty straightforward verse with great leadership lessons. Be humble, put others before yourself, and be a servant leader.
From the very beginning of any interrogation, the accused has constitutional rights not to speak to police and also to have an attorney present. The Eighth Amendment to the Constitution prohibits cruel and unusual punishments placed upon any persons in the U.S. With these rights in mind I will only go as far as the Constitution allows when interrogating this suspect even if the suspect admits where the child is if the admission was coerced that admission could get thrown out of court. I would never compromise the investigation. There are other ways to find the abducted girl through detective work than just interrogating the suspect. The cost of illegal interrogations is documented in the number of lost prosecutions. Literally, thousands of cases across the country have had to be dismissed because prosecutors could not trust that the evidence provided by police officers was legitimate or the officer had lost credibility as a witness in all cases because of his or her wrongdoing (P.
DB Response prompt ZAKChapter 7, Q1.Customers are expecting.docxedwardmarivel
DB Response prompt ZAK
Chapter 7, Q1.
Customers are expecting more from their service providers. Rather than traditionally accepting boilerplate offerings from service providers, customers desire that service providers cater to their requests. Organizations providing services must keep up with the customer’s demand or risk losing business to others who will. Many service providers have been adopting lean principles to accommodate the needs of their customers in successful attempts to decrease waste, increase efficiency, improve customer service and satisfaction (Daft, 2016, p. 275). From online music providers, customers expect music tracks personalized for their tastes. From airlines, customers can expect preflight seat and meal selections. Amazon.com provides custom personalization to a customers’ home pages by placing personally directed advertisements and products which the customer is more likely to order from the company. Amazon book recommendations are personalized to the specific customer and are provided based upon previous books read. With customers expecting customized and catered experiences, companies need to keep up with this demand and embrace mass customization in order to obtain and retain customers.
Chapter 7, Q2.
While many facets of businesses may involve craft technology, it is still important for business schools to teach management. Some businesses which only expect their leaders to gain knowledge and expertise from experience, may be creating a bureaucratic and restricted model for their business. Companies which rely only on internal training for their leaders can miss opportunities from potential leaders coming in from the outside. Business schools which teach management can provide potential leaders with a foundation to draw from. Teaching management can expose students to issues and opportunities experienced by others, not just ones restricted to one specific company. Teaching management from a textbook is just one method of conveying information. Just as one would not necessarily be proficient in piloting a boat from reading a book, a textbook about doing so would provide the student with underlying concepts which could dramatically increase the success of the student when they move to an actual boat. This textbook based training would be further enhanced with some practical experience.
Chapter 8, Q1.
Technology has progressed allowing real time instant messaging and virtual meetings. High level managers can indeed expect technology to allow them to do their jobs with little face-to-face communication, but they should question if that is something they really want to do. There are currently methods available which could be used effectively to communicate with subordinates, employees and stockholders, such as recorded feeds which would be able to reach every associated individual. These however may not provide a sense of personalization from the managers. Leaders in an organization may resort to using tec.
DB Topic of Discussion Information-related CapabilitiesAnalyze .docxedwardmarivel
DB Topic of Discussion: Information-related Capabilities
Analyze 2 of the 14 information-related capabilities and explain how the joint force can use these capabilities to affect the three dimensions of the information environment. Give examples of real-world or life events for the capabilities and how can you use these concepts as a CSM/SGM.
Consumer Brand Metrics Q3 2015
Eater Archetypes:
Brand usage and preferences by consumer segment
The restaurant industry has long relied on demographic factors to
identify and prioritize consumer groups. For example, many
brands currently obsess over attracting Millennials—some
without pausing to consider the variations among consumers
within this demographic cohort. In addition to life stages,
consumer attitudes about health, value, convenience and the
overall role of foodservice in their lives drive significant
differences in preferences and behavior.
With these distinctions in mind, we have updated the Consumer
Brand Metrics (CBM) survey with questions that allow us to
segment consumers into one of seven Eater Archetypes. Each
segment has a distinct psychographic profile, which is outlined in
our recent Consumer Foodservice Landscape. Accordingly, their
patronage of the segments and brands tracked in CBM varies.
This paper explores some differences we can discern after the
initial quarterly results, including the archetypes’ segment usage,
brand patronage and occasion dynamics. Examining CBM data by
Eater Archetype reveals nuances that complement a demographic
profile of a chain’s guests.
By Colleen Rothman, Manager, Consumer Insights
To learn more about the Consumer Brand Metrics program or to sign up for future
Spotlight by Consumer Brand Metrics white papers, please contact Bart Henyan,
Senior Marketing Manager, at [email protected]
Consumer Brand Metrics Q3 2015
Segmenting consumers by psychographic factors, rather than
just demographic characteristics, can lead to a better
understanding of the consumers that matter to your brand and
how to appeal to them.
Key Takeaways
Busy Balancers and Functional Eaters drive usage across
restaurants and convenience stores. Full-service restaurant
(FSR) operators may also consider targeting Foodservice
Hobbyists and Affluent Socializers, as these archetypes
comprise more than a quarter of FSR patrons, on average.
How does foodservice segment usage vary by archetype?
Driven by unique needs and motivations, Eater Archetypes
gravitate to a wide variety of brands. For example,
McDonald’s, Burger King and Whataburger each
disproportionately attract unique archetypes (Habitual
Matures, Bargain Hunters and Functional Eaters,
respectively).
Which chains do each archetype visit most frequently?
Archetypes that patronize the same restaurant may not use
the brand the same way. For example, usage varies by
daypart, with afternoon snacks skewing to Busy Balancers
and late-night meals d.
DB Instructions Each reply must be 250–300 words with a minim.docxedwardmarivel
DB Instructions:
Each reply must be 250–300 words with a minimum of 1 scholarly source. The scholarly source used for your thread and response should be in addition to the class textbooks.
Reference Book: Young, M. (2017). Learning the Art of Helping. Boston, MA: Pearson. ISBN: 9780134165783.
.
DB Defining White Collar CrimeHow would you define white co.docxedwardmarivel
DB: Defining White Collar Crime
How would you define white collar crime? What are the advantages and disadvantages of the various terms, such as “white collar crime,” “crimes of the powerful,” “elite deviance,” etc., used to describe the type of crimes.
300 Word Minimum
.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
Beyond Degrees - Empowering the Workforce in the Context of Skills-First.pptxEduSkills OECD
Iván Bornacelly, Policy Analyst at the OECD Centre for Skills, OECD, presents at the webinar 'Tackling job market gaps with a skills-first approach' on 12 June 2024
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
Liberal Approach to the Study of Indian Politics.pdf
Day 08 ActivityFisher & HughesSeptember 21, 2018StudyA study was c.docx
Day 08 Activity
Fisher & Hughes
September 21, 2018

Study

A study was conducted to determine the effects of alcohol on human reaction times. Fifty-seven adults within two age groups were recruited for this study and randomly allocated into one of three alcohol treatment groups: a control group, whose subjects remained sober during the entire study; a moderate group, whose subjects were supplied alcohol but limited in such a way that their blood alcohol content (BAC) remained under the legal limit to drive (BAC of 0.08); and a high group, who received enough alcohol that their BAC could exceed the legal limit for driving. Each subject was trained on a video game system, and their reaction time (in milliseconds) to a visual stimulus was recorded at 7 time points 30 minutes apart (labeled T0=0, T1=30, T2=60, and so on). At time point T0 all subjects were sober; those in the alcohol consumption groups began drinking after the first measured reaction time (controlled within the specifications outlined). The researcher is interested in determining the influence alcohol and age (namely, is reaction time different for those in their 20s versus their 30s?) have on reaction times.

The task for today is to do a complete analysis for this study and dig into the effects alcohol, age, and time have on reaction times.

Data input and wrangling
First read in the data:

```r
alcohol <- read.csv("alcoholReaction.csv")
head(alcohol)
```

```
##   Subject Age Alcohol    T0    T1    T2    T3    T4    T5    T6
## 1       1  24 Control 255.3 254.8 256.4 255.1 257.0 256.1 257.0
## 2       2  34 Control 250.1 249.2 249.0 248.0 248.0 248.9 248.1
## 3       3  31 Control 248.2 247.1 246.9 246.7 246.0 246.0 247.0
## 4       4  24 Control 253.9 253.8 254.9 254.1 253.2 254.1 255.0
## 5       5  38 Control 250.0 251.0 250.0 249.9 248.8 249.1 249.9
## 6       6  38 Control 246.0 248.0 247.0 248.1 248.1 246.9 244.0
```
Note the Age variable is recorded as an actual age in years, not the category of 20s or 30s like we want – we need to dichotomize this variable. Also note the data is in wide format – the reaction times (the response variables) are spread over multiple columns. We need a way to gather these columns into a single column. So we need to do some data processing.
First consider the below code:

```r
head(alcohol %>%
       mutate(Age = case_when(Age < 31 ~ "20s",
                              Age %in% 31:40 ~ "30s")))
```

```
##   Subject Age Alcohol    T0    T1    T2    T3    T4    T5    T6
## 1       1 20s Control 255.3 254.8 256.4 255.1 257.0 256.1 257.0
## 2       2 30s Control 250.1 249.2 249.0 248.0 248.0 248.9 248.1
## 3       3 30s Control 248.2 247.1 246.9 246.7 246.0 246.0 247.0
## 4       4 20s Control 253.9 253.8 254.9 254.1 253.2 254.1 255.0
## 5       5 30s Control 250.0 251.0 250.0 249.9 248.8 249.1 249.9
## 6       6 30s Control 246.0 248.0 247.0 248.1 248.1 246.9 244.0
```
case_when is essentially a piece-wise comparison. When Age is less than 31, you overwrite the Age variable with "20s"; when Age is between 31 and 40, you replace it with "30s". In this example we used both a < comparison and the %in% statement we've seen before, just to show multiple ways of doing it. Also note we include 30 in the 20s group and 40 in the 30s group, so each group spans 10 ages.

```r
alcohol <- alcohol %>%
  mutate(Age = case_when(Age < 31 ~ "20s",
                         Age %in% 31:40 ~ "30s"))
```
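As an aside, the same dichotomization can be sketched in base R with `cut()`, which bins a numeric vector by interval. The ages below are made up purely for illustration; this is an alternative, not part of the original analysis.

```r
# Made-up ages spanning both groups (illustrative only)
ages <- c(24, 30, 31, 38, 40)

# cut() with right-closed intervals puts 30 in (20,30] ("20s") and
# 40 in (30,40] ("30s"), matching the case_when rule above
age_group <- cut(ages, breaks = c(20, 30, 40), labels = c("20s", "30s"))
as.character(age_group)
```

The interval endpoints mirror the case_when logic exactly, including the boundary cases at 30 and 40.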
So the Age variable has been categorized. Now we need to convert the data from wide to tall format. We do this with the gather() function included in tidyverse.

```r
alcohol.tall <- alcohol %>%
  gather(key=Time, value=Reaction, c(T0, T1, T2, T3, T4, T5, T6))
```
A blurb about gather: there are essentially three inputs into the gather() function.

- key – provides the name of the new variable we are going to create that consists of the column names
- value – the name for the new variable that will house the values originally stored in the columns of interest
- The final part is a list of all the columns we want to gather; in this case, T0, T1, T2, T3, T4, T5 and T6.

```r
head(alcohol.tall, n=10)
```

```
##    Subject Age Alcohol Time Reaction
## 1        1 20s Control   T0    255.3
## 2        2 30s Control   T0    250.1
## 3        3 30s Control   T0    248.2
## 4        4 20s Control   T0    253.9
## 5        5 30s Control   T0    250.0
## 6        6 30s Control   T0    246.0
## 7        7 20s Control   T0    248.8
## 8        8 30s Control   T0    245.9
## 9        9 20s Control   T0    246.9
## 10      10 30s Control   T0    249.1
```
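For readers without the tidyverse loaded, here is a sketch of the same wide-to-tall step using base R's reshape(); the two-subject data frame is invented for illustration, with only two of the seven time columns.

```r
# Invented wide data with the same shape as the alcohol data
wide <- data.frame(Subject = 1:2,
                   T0 = c(255.3, 250.1),
                   T1 = c(254.8, 249.2))

# direction = "long" stacks the T* columns: timevar plays the role of
# gather()'s key, and v.names the role of its value
tall <- reshape(wide, direction = "long",
                varying = c("T0", "T1"), v.names = "Reaction",
                timevar = "Time", times = c("T0", "T1"),
                idvar = "Subject")
tall[order(tall$Subject), c("Subject", "Time", "Reaction")]
```

Either route produces one row per subject-by-time combination, which is the shape the repeated measures analysis below expects.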
You will now note the data is in a tall format, which is good for analysis.

Lastly, so R doesn't try to treat it as a number, we tell it that the Subject variable is a factor, or categorical, variable. I also put the Alcohol levels in the order we think…

```r
alcohol.tall <- alcohol.tall %>%
  mutate(Subject = as.factor(Subject),
         Alcohol = factor(Alcohol, levels=c("Control", "Moderate", "High")))
```

Exploratory Data Analysis
There are 2 categories for age, 3 categories for alcohol use, and then 7 time points to consider. That is essentially 2 × 3 × 7 = 42 combinations. Rather than look numerically, we will consider things graphically.

First we consider a plot of the Reaction times over Time based on Alcohol treatment, with Age determining the linetype.

```r
ggplot(alcohol.tall) +
  geom_line(aes(x=Time, y=Reaction, group=Subject,
                color=Alcohol, linetype=Age))
```
Not only is this plot noisy, it is hard to determine anything. Let's facet based on Age.

```r
ggplot(alcohol.tall) +
  geom_line(aes(x=Time, y=Reaction, group=Subject, color=Alcohol)) +
  facet_wrap(~Age)
```
This second plot is improved but still quite noisy. Let's plot average profiles rather than the raw data.

```r
ggplot(alcohol.tall, aes(x=Time, y=Reaction, group=Alcohol, color=Alcohol)) +
  stat_summary(fun.y=mean, geom="line") +
  facet_wrap(~Age)
```

These average profiles are fairly telling and maybe even a little surprising. Overall you see the High alcohol group (blue line) shows an increase in reaction time over the course of the study. The Control group shows a slight decrease in the 30s panel, but note the drop is only about half a unit.

Model fitting and analysis
We fit a 2-factor repeated measures model and look at the output.

```r
fit <- aov(Reaction ~ Age*Alcohol*Time + Error(Subject/Time), data=alcohol.tall)
summary(fit)
```

```
##
## Error: Subject
##             Df Sum Sq Mean Sq F value Pr(>F)
## Age          1     18   17.72   0.254  0.616
## Alcohol      2    143   71.47   1.026  0.366
## Age:Alcohol  2     93   46.31   0.665  0.519
## Residuals   51   3553   69.66
##
## Error: Subject:Time
##                   Df Sum Sq Mean Sq F value   Pr(>F)
## Time               6   50.3   8.386   6.929 6.45e-07 ***
## Age:Time           6   10.3   1.714   1.416  0.20786
## Alcohol:Time      12   40.0   3.330   2.752  0.00145 **
## Age:Alcohol:Time  12   13.8   1.150   0.950  0.49702
## Residuals        306  370.4   1.210
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
First we look at the most complicated interaction term, in this case Age:Alcohol:Time, and it is NOT significant. So we follow up by considering the two-way interaction terms. We see Age:Alcohol and Age:Time are not significant but Alcohol:Time is: there is an interaction between Alcohol group and Time. Given the interactions involving Age are not significant, we can also consider the Age main effect, but see it is also insignificant (F-stat 0.254 on 1 and 51 degrees of freedom, p-value = 0.616). Age appears to have no influence on the reaction times. We follow up with conditional multiple comparisons.

Multiple Comparison Follow-ups
Note: We have two levels of control in this study: there is an explicit Control group, and at time point T0 no subjects had been given a treatment, so it also operates as a control. Dunnett's method for multiple comparison is most appropriate (see chapter 2.7 of the text).

We see that Alcohol and Time both matter, but perhaps in different ways. We consider both conditional comparisons. First we run the emmeans() code.

```r
mc.alc <- emmeans(fit, ~ Alcohol | Time)
```

```
## Warning in emm_basis.aovlist(object, ...): Some predictors are correlated with the intercept - results are biased.
## May help to re-fit with different contrasts, e.g. 'contr.sum'
## NOTE: Results may be misleading due to involvement in interactions
```

```r
mc.time <- emmeans(fit, ~ Time | Alcohol)
```

```
## Warning in emm_basis.aovlist(object, ...): Some predictors are correlated with the intercept - results are biased.
## May help to re-fit with different contrasts, e.g. 'contr.sum'
## NOTE: Results may be misleading due to involvement in interactions
```
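Before reading the output, it may help to see what a treatment-versus-control ("trt.vs.ctrl") contrast actually computes: each treatment mean is compared against the single reference (control) mean. A base-R sketch with invented cell means for one time point; the real emmeans output additionally carries Dunnett-adjusted standard errors and p-values.

```r
# Invented cell means for one time point (illustrative only)
means <- c(Control = 250.0, Moderate = 251.1, High = 251.4)

# trt.vs.ctrl with ref = 1: subtract the first (control) mean
# from every other treatment mean
diffs <- means[-1] - means["Control"]
round(diffs, 2)
```

These raw differences are the "estimate" column in the contrast tables below; Dunnett's method then adjusts the p-values for comparing several treatments against the same control.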
First we consider the effects of alcohol conditioning at different time points.

```r
contrast(mc.alc, "trt.vs.ctrl", ref=1)
```

```
## Time = T0:
##  contrast            estimate        SE    df t.ratio p.value
##  Moderate - Control  1.077143 1.0774800 51.00   1.000  0.5097
##  High - Control      1.400260 0.9861516 51.00   1.420  0.2799
##
## Time = T1:
##  contrast            estimate        SE    df t.ratio p.value
##  Moderate - Control  1.753810 1.2014014 78.06   1.460  0.2590
##  High - Control      1.816169 1.0995693 78.06   1.652  0.1841
##
## Time = T2:
##  contrast            estimate        SE    df t.ratio p.value
##  Moderate - Control  1.947143 1.2014014 78.06   1.621  0.1950
##  High - Control      2.023896 1.0995693 78.06   1.841  0.1274
##
## Time = T3:
##  contrast            estimate        SE    df t.ratio p.value
##  Moderate - Control  2.133810 1.2014014 78.06   1.776  0.1450
##  High - Control      2.613442 1.0995693 78.06   2.377  0.0380
##
## Time = T4:
##  contrast            estimate        SE    df t.ratio p.value
##  Moderate - Control  2.405476 1.2014014 78.06   2.002  0.0907
##  High - Control      2.814351 1.0995693 78.06   2.560  0.0239
##
## Time = T5:
##  contrast            estimate        SE    df t.ratio p.value
##  Moderate - Control  2.365476 1.2014014 78.06   1.969  0.0975
##  High - Control      3.206623 1.0995693 78.06   2.916  0.0090
##
## Time = T6:
##  contrast            estimate        SE    df t.ratio p.value
##  Moderate - Control  2.487143 1.2014014 78.06   2.070  0.0781
##  High - Control      3.517532 1.0995693 78.06   3.199  0.0039
##
## Results are averaged over the levels of: Age
## P value adjustment: dunnettx method for 2 tests
```

```r
plot(contrast(mc.alc, "trt.vs.ctrl", ref=1))
```
First note that in all seven comparisons, the Moderate group is never different from the Control group (this is true at all time points; the smallest adjusted p-value is 0.0781). Thus, the profiles of the Moderate group and the Control group are statistically the same.

We can see that in the early time points there was no difference between the treatment groups receiving alcohol and those not, but as time progressed the High alcohol group had longer reaction times than the control (starting at T3, with adjusted p-value 0.0380, it is always significant).
Next we compare the effects of time conditioning on the alcohol group.

```r
contrast(mc.time, "trt.vs.ctrl", ref=1)
```

```
## Alcohol = Control:
##  contrast   estimate        SE  df t.ratio p.value
##  T1 - T0   0.1700000 0.3478929 306   0.489  0.9675
##  T2 - T0   0.1750000 0.3478929 306   0.503  0.9647
##  T3 - T0   0.2600000 0.3478929 306   0.747  0.8938
##  T4 - T0   0.0700000 0.3478929 306   0.201  0.9976
##  T5 - T0  -0.1750000 0.3478929 306  -0.503  0.9647
##  T6 - T0  -0.1600000 0.3478929 306  -0.460  0.9727
##
## Alcohol = Moderate:
##  contrast   estimate        SE  df t.ratio p.value
##  T1 - T0   0.8466667 0.4017122 306   2.108  0.1603
##  T2 - T0   1.0450000 0.4017122 306   2.601  0.0492
##  T3 - T0   1.3166667 0.4017122 306   3.278  0.0065
##  T4 - T0   1.3983333 0.4017122 306   3.481  0.0032
##  T5 - T0   1.1133333 0.4017122 306   2.771  0.0309
##  T6 - T0   1.2500000 0.4017122 306   3.112  0.0111
##
## Alcohol = High:
##  contrast   estimate        SE  df t.ratio p.value
##  T1 - T0   0.5859091 0.3398943 306   1.724  0.3302
##  T2 - T0   0.7986364 0.3398943 306   2.350  0.0929
##  T3 - T0   1.4731818 0.3398943 306   4.334  0.0001
##  T4 - T0   1.4840909 0.3398943 306   4.366  0.0001
##  T5 - T0   1.6313636 0.3398943 306   4.800  <.0001
##  T6 - T0   1.9572727 0.3398943 306   5.758  <.0001
##
## Results are averaged over the levels of: Age
## P value adjustment: dunnettx method for 6 tests
```

```r
plot(contrast(mc.time, "trt.vs.ctrl", ref=1))
```
We see that the Control group never deviates from the control time point (T0). This should not be surprising given they remained sober for the entire study. In both of the other treatments we see the influence of Time (and thus alcohol consumption) on reaction times.

Even though the profile of the Moderate group was not significantly different from the Control group, they did experience an increase in reaction times with the consumption of alcohol (just not enough to deviate overall from the Control group). We see that the High consumption group did deviate from the Control group sometime around time point T3 (90 minutes).

Conclusions
We established above that the key finding is that those with a high dosage of alcohol had longer reaction times compared to the control group as time progressed. We also find that those receiving a moderate amount of alcohol performed similarly to the control group. We close by building a profile plot to summarize the findings (remember, Age was not important).

First we plot the profiles of the three alcohol treatments, summarizing over all ages.

```r
alcohol.summary <- alcohol.tall %>%
  group_by(Alcohol, Time) %>%
  summarize(Mean = mean(Reaction),
            SE = sd(Reaction)/sqrt(n()))

ggplot(alcohol.summary, aes(x=Time, y=Mean, color=Alcohol)) +
  geom_errorbar(aes(ymin=Mean-SE, ymax=Mean+SE), width=0.1, position=position_dodge(0.3)) +
  geom_line(aes(group=Alcohol), position=position_dodge(0.3)) +
  geom_point(position=position_dodge(0.3))
```
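The SE column computed above is the usual standard error of the mean, sd(x)/sqrt(n), which sets the half-width of the error bars. A quick base-R check on invented reaction times for a single Alcohol-by-Time cell:

```r
# Invented reaction times for one Alcohol-by-Time cell (illustrative only)
x <- c(255.3, 250.1, 248.2, 253.9)

m  <- mean(x)
se <- sd(x) / sqrt(length(x))  # standard error of the mean
c(Mean = m, SE = se)
```

This matches what summarize() produces per group, since n() inside summarize is just the group size.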
Note this plot is a bit misleading since we have plotted the
moderate group separately even though it is statistically
similar to the control group (note the SE bars overlap for all
time points for the moderate and control groups). To link the
control and moderate groups, we have to do a bit more data
processing. In the code below we recast the Alcohol variable
into only two groups.
alcohol.summary2 <- alcohol.tall %>%
  mutate(Alcohol = case_when(Alcohol=="High" ~ "Legally Drunk",
                             TRUE ~ "Legally Sober")) %>%  # `TRUE ~` is everything else
  group_by(Alcohol, Time) %>%
  summarize(Mean = mean(Reaction),
            SE = sd(Reaction)/sqrt(n()))
The TRUE ~ "Legally Sober" line tells R that in any other case
(TRUE is always true) the observation should be marked as
Legally Sober. In the first line of the case_when statement we
use the == operator to test for equality.
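case_when() evaluates its conditions in order and returns the right-hand side of the first condition that is TRUE, so the final TRUE ~ clause acts as a catch-all. A tiny self-contained example (the input vector below is made up):

```r
library(dplyr)
# Hypothetical input vector; conditions are checked top to bottom
x <- c("High", "Moderate", "Control")
case_when(x == "High" ~ "Legally Drunk",
          TRUE ~ "Legally Sober")
## [1] "Legally Drunk" "Legally Sober" "Legally Sober"
```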
Now we make an overall plot summarizing the findings of our
study. To demonstrate the level of sophistication we can include
in a plot, I do quite a bit with axes, labeling and color choices.
Note this is the sort of thing covered in detail in STA404. Here
we demonstrate the functionality.
ggplot(alcohol.summary2, aes(x=Time, y=Mean, color=Alcohol)) +
  geom_errorbar(aes(ymin=Mean-SE, ymax=Mean+SE),
                width=0.1, position=position_dodge(0.3)) +
  geom_line(aes(group=Alcohol), position=position_dodge(0.3)) +
  geom_point(position=position_dodge(0.3)) +
  scale_x_discrete(name="Minutes since start of study",
                   labels=c("0","30","60","90","120","150","180")) +
  scale_color_manual(name="Alcohol level",
                     values=c("darkgreen", "cyan")) +
  labs(y="Mean Reaction Time (ms)") +
  theme_bw() +
  ggtitle("Alcohol effects on Reaction time to Visual Stimulus") +
  theme(legend.position=c(0.125,0.85)) # 0.125 (ie 12.5%) from the left edge, 0.85 from the bottom edge
UVA-M-0677H
Rev. Dec. 9, 2015
This user guide was prepared by Paul W. Farris, Landmark
Communications Professor of Business Administration, and Gerry
Yemen, Senior Researcher; University of Virginia Darden School
Foundation, Charlottesville, VA. All rights reserved.
To order copies, send an e-mail to [email protected]. No part of
this publication may be reproduced, stored in a retrieval system,
used in a spreadsheet, or transmitted in any form or by any
means—electronic, mechanical, photocopying, recording, or
otherwise—without the permission of the Darden School Foundation.
Positioning Game
User Guide
Overview
The Positioning Game simulation (UVA-M-0677) was designed
to offer an opportunity to actively
experiment with realistic problems in product marketing—
market definition, segmentation, and positioning.
The game has a focus on perceptual mapping; players will be
asked to make decisions, often quickly, in the
context of a new product launch and impending competition.
There are anywhere from two to six players in each market who
are competing for customers in the
segment. All players must be logged in at the same time to play.
Your instructor will set the number of rounds
to be played as well as the duration of each round. A timer has
been built into the game and coordinates the
rounds moving forward. No new round of positioning can occur
until all players in the market have completed
the round.
If a player’s computer is disconnected, the simulation will wait
for that player to reconnect and either click “Submit,” or allow
the timer to expire. The simulation advances automatically when
the first of the following is true: all players in the market
click Submit, or the timer is allowed to run out while open on
the computers of all users in the market.
A password is required to access the simulation. If you don’t
have one prior to playing, please contact your
instructor. Once the game URL is available, your instructor will
invite you to start the game. But before you
do, please read the instructions with care.
Signing In and Starting the Game
To access the exercise, open the URL you received either from
your instructor in class or in an e-mail
message sent through the Forio.com simulation platform. You
will be prompted with a log-in screen (Figure 1).
Please enter your e-mail address or username and the assigned
password, then click “Log In.”
Figure 1. User log-in screen.
Source: All figures created by case writer.
At the start of the first round, brief instructions will ask you to
consider where the best position for your
product would be on the grid (see Figure 2). Please note that
there are two tags in the upper right corner:
People icon:
Player Identification icon:
The People icon indicates the number of players who have
already logged in to your market. Clicking on
or moving your mouse over the People icon provides the status
of all players in the group—if green, that player
is logged in, and if red, that player is not logged in (see Figure
2). The color-coded Player Identification icon
features your log-in name.
Figure 2. Game instructions.
Clicking the “Join Game” button will either display a
notification in a dialog box with a progress bar
indicating the number of other players in your market who are
ready to start playing (see Figure 3) or will begin
the round and start the timer if all other players are ready. The
round cannot start until all players are logged in
and have joined the game. If you are the last player to click Join
Game, the progress bar will not appear. If for
some reason a player has to log out, the game will resume at the
place where it was left.
Once everyone has joined the game, the market interface will
appear (see Figure 4, which shows the beverage industry default
market; your instructor may have chosen a different industry, in
which case the corner products will look different).
Each corner of the market shows one of the four product choices
based upon the extremes of taste. In this
example, in the top right quadrant is cola, which is very sweet
and fizzy. Soda water (lower right quadrant) is
plain and fizzy. Water (lower left quadrant) is plain and flat.
And juice (top left quadrant) is very sweet and flat.
On the first round, the user sees customers’ preferences graphed
within the market space. These customers are
currently buying cola, soda water, water, or juice based on how
closely their tastes are aligned to each of these
four products. The challenge is to enter the market space with a
product that matches the taste of as many
customers as possible. The trick is that there are five other
products that will be launching the same week, and
you do not yet know where they fit within the market. Your
product icon will be round with a light gray
background and your competitors will be squares with black
backgrounds. Note that there is also a timer at the
bottom of the interface that starts running when all players are
ready. There is also a cost indicator set at $0 for
the first round.
Figure 3. Number of players ready to start.
Figure 4. The market.
Positioning the Product
Players will need to click on the grid to set a position
somewhere among customer preferences as an initial
product placement. The product will then be displayed on the
map with a border that matches the color of
your username in the upper right side of the header of the
simulation interface (see Figure 5). Note that players
can click elsewhere on the map (or click and drag their product)
to change their position for the week. During
the first round, there are no costs to position a product wherever
the player wants. This can be done as many
times as desired until either the Submit button is clicked or the
round timer runs out. A check mark will appear
on products whose players have already clicked Submit (see
Figure 6). This means that player can no longer
reposition their product, and their final position for that round
has been recorded. A progress bar will appear
at the bottom of the grid. The next round will not be launched
until all players in the market have submitted
their placement or their timer has run out.
Figure 5. Product placement.
Figure 6. Checked product waiting for competitors.
As each round is played, the “Results of Week [#]” box toward
the upper right side of the interface will
display the development costs for the previous week as well as
income from orders, marketing and development
cost, and operating profit for the previous round (see Figure 7).
After the first round, the more a player changes
the taste of his or her product (the farther his or her product is
moved on the grid), the more the development
costs will increase. The simulation will automatically advance
as soon as all of the players in the same group
have pressed their Submit buttons, or their round timers have
run out. The next round starts once all players
have submitted choices from the previous round. Players will be
able to see where their competitors have
positioned their products, as well as how much of the market
their product has captured. The “Cumulative
Profits” box in the lower right side of the interface shows the
total cumulative profits each product captured
in the previous rounds.
Figure 7. End of first week in a six-player, low-cost, 15-week
game.
Each player will then have the chance to modify the taste of his
or her product by repositioning it within
the market space in the next round. Just as in the previous
round, the farther a product moves from the place
it started at the beginning of the round, the more costly the
modification to the product will be (see Figure 8).
Figure 8. Week two of six-player, low-cost, 15-week game.
The game will end after a predetermined number of rounds have
been played (see Figure 9), and your
instructor will then be able to review each week’s results with
the class.
Figure 9. Simulation ended.
MARKETING EXERCISE:
The Positioning
Game
In this multi-player exercise, students compete
within a single market to maximize profit and
market share for their specific product. Students
make key decisions regarding product features
and gain insights into market definition, market
segmentation, and the critical role of product
positioning in marketing strategy.
A perceptual map captures the features and benefits that
consumers seek most and displays them in two dimensions for
easy analysis. In The Positioning Game, groups of students
compete by using the perceptual map to position their
product at an ideal point in the market. Through a series of
rounds, students decide whether—and where—to move their
product’s position based on market conditions, competitors’
choices, and their own results. As in the real world, students
face time pressure, costs associated with product changes,
and the unseen decisions of competitors.
This online exercise can be played simultaneously by groups
of 2–6 students. Instructors can customize elements including
group size, customer preferences, timing and number of
rounds, cost to move, and variability of customer choices.
The default market is the beverage industry, with product
characteristics of “sweet,” “plain,” “fizzy,” or “flat,” but
instructors can designate alternate industries or product
characteristics. Student performance can be assessed based
on cumulative profits, average profits, development cost,
product position, and/or market share.
Developed by the University of Virginia Darden School of
Business, The Positioning Game shows students the importance
of planning and the potential positioning complexities of
launching a new or updated product. Students learn key
lessons regarding market structure and segments, brand
perception, competitive analysis, and consumer-driven
product development.
LEARN MORE ON OUR WEB SITE:
hbsp.harvard.edu
IDEAL FOR COURSES IN:
MARKETING, NEW PRODUCT DEVELOPMENT
ENGAGING AND FAST-PACED PLAY
SIMPLE, CUSTOMIZABLE SETUP AND
ADMINISTRATION
Powerful administrative tools allow for
easy management of teams and real-time
reporting of student decisions. Summary
and analysis screens provide valuable
information for post-play class discussion.
Instructors can accommodate different
class sizes and learning objectives by
customizing variables such as:
• Group size
• Timing and number of rounds
• Cost to move
• Distribution of customers on map
• Product properties
• Product names
• Welcome and end-game messages
CUSTOMER SERVICE AND TECH SUPPORT
6 am - 8 pm ET
Monday through Friday
9 am - 5 pm ET
Saturday and Sunday
Customer Service:
1-800-545-7685
(1-617-783-7600 outside the U.S. and Canada)
[email protected]
Technical Support:
1-800-810-8858
(1-617-783-7700 outside the U.S. and Canada)
[email protected]
EDUCATORS Get updates from us at [email protected]
ACADEMIC DISCOUNTS FOR STUDENTS
                 Academic discount   Executive education
Cases            $3.95               $6.95
Articles         $3.95               $6.95
Simulations      $15                 $45
Online Courses   $45–$75             $90–$150
Exercises        $10                 $25
Similar discounts apply to all teaching
materials at hbsp.harvard.edu. Prices
subject to change without notice.
Results can be displayed by Cumulative Profits, Average Profits,
Development Costs, and Product Positions.