This document proposes a comprehensive approach to testing data warehouse systems, with a focus on testing individual data marts. It introduces the following:
1. A classification of testing activities based on what is tested (conceptual schema, logical schema, ETL procedures, database, front-end) and how it is tested (functional, usability, performance, etc.).
2. A discussion of testing activities related to each phase of a reference data mart design methodology (requirement analysis, analysis and reconciliation, conceptual design, etc.).
3. A recommendation to define metrics and acceptance thresholds for testing activities to make the tests quantifiable and automatable (see the sketch below).
The document argues that testing should begin early in the design process to reduce the cost and effort of correcting errors discovered late in the life-cycle.
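To make point 3 concrete, the following is a minimal sketch, not code from the paper, of what a quantifiable and automatable ETL test might look like: it computes a metric (the relative row-count discrepancy between a source table and the loaded fact table) and compares it against an explicit acceptance threshold. The table names, the database file, and the 1% threshold are all illustrative assumptions.

```python
import sqlite3  # stand-in for any DB-API connection; purely illustrative

# Hypothetical acceptance threshold: the ETL load fails the test if more
# than 1% of source rows are lost or duplicated on the way to the fact table.
MAX_ROW_COUNT_DISCREPANCY = 0.01

def row_count(conn, table):
    """Return the number of rows in the given table."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def etl_row_count_test(conn, source_table, fact_table):
    """Metric: relative row-count discrepancy between source and fact table.
    The test passes iff the metric stays within the acceptance threshold."""
    source_rows = row_count(conn, source_table)
    fact_rows = row_count(conn, fact_table)
    if source_rows == 0:
        return fact_rows == 0  # an empty source must yield an empty fact table
    discrepancy = abs(source_rows - fact_rows) / source_rows
    print(f"{source_table} -> {fact_table}: {source_rows} vs {fact_rows} rows "
          f"(discrepancy {discrepancy:.2%}, threshold "
          f"{MAX_ROW_COUNT_DISCREPANCY:.0%})")
    return discrepancy <= MAX_ROW_COUNT_DISCREPANCY

if __name__ == "__main__":
    # 'datamart.db', 'sales_staging', and 'fact_sales' are made-up names.
    conn = sqlite3.connect("datamart.db")
    assert etl_row_count_test(conn, "sales_staging", "fact_sales"), \
        "ETL row-count test failed: discrepancy exceeds acceptance threshold"
```

Because both the metric and the acceptance threshold are explicit, a check like this can run unattended after every ETL load, which is what makes the test quantifiable and automatable in the sense the summary describes.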
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTINGijseajournal
Researchers consider that the first edition of the book "The Art of Software Testing" by Myers (1979)
initiated research in Software Testing. Since then, software testing has gone through evolutions that have
driven standards and tools. This evolution has accompanied the complexity and variety of software
deployment platforms. The migration to the cloud allowed benefits such as scalability, agility, and better
return on investment. Cloud computing requires more significant involvement in software testing to ensure
that services work as expected. In addition to testing cloud applications, cloud computing has paved the
way for testing in the Test-as-a-Service model. This review aims to understand software testing in the
context of cloud computing. Based on the knowledge explained here, we sought to linearize the evolution of
software testing, characterizing fundamental points and allowing us to compose a synthesis of the body of
knowledge in software testing, expanded by the cloud computing paradigm.
From the Art of Software Testing to Test-as-a-Service in Cloud Computingijseajournal
Researchers consider that the first edition of the book "The Art of Software Testing" by Myers (1979)
initiated research in Software Testing. Since then, software testing has gone through evolutions that have
driven standards and tools. This evolution has accompanied the complexity and variety of software
deployment platforms. The migration to the cloud allowed benefits such as scalability, agility, and better
return on investment. Cloud computing requires more significant involvement in software testing to ensure
that services work as expected. In addition to testing cloud applications, cloud computing has paved the
way for testing in the Test-as-a-Service model. This review aims to understand software testing in the
context of cloud computing. Based on the knowledge explained here, we sought to linearize the evolution of
software testing, characterizing fundamental points and allowing us to compose a synthesis of the body of
knowledge in software testing, expanded by the cloud computing paradigm.
One of the core quality assurance feature which combines fault prevention and fault detection, is often known as testability approach also. There are many assessment techniques and quantification method evolved for software testability prediction which actually identifies testability weakness or factors to further help reduce test effort. This paper examines all those measurement techniques that are being proposed for software testability assessment at various phases of object oriented software development life cycle. The aim is to find the best metrics suit for software quality improvisation through software testability support. The ultimate objective is to establish the ground work for finding ways reduce the testing effort by improvising software testability and its assessment using well planned guidelines for object-oriented software development with the help of suitable metrics.
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTINGijseajournal
Researchers consider that the first edition of the book "The Art of Software Testing" by Myers (1979)
initiated research in Software Testing. Since then, software testing has gone through evolutions that have
driven standards and tools. This evolution has accompanied the complexity and variety of software
deployment platforms. The migration to the cloud allowed benefits such as scalability, agility, and better
return on investment. Cloud computing requires more significant involvement in software testing to ensure
that services work as expected. In addition to testing cloud applications, cloud computing has paved the
way for testing in the Test-as-a-Service model. This review aims to understand software testing in the
context of cloud computing. Based on the knowledge explained here, we sought to linearize the evolution of
software testing, characterizing fundamental points and allowing us to compose a synthesis of the body of
knowledge in software testing, expanded by the cloud computing paradigm.
From the Art of Software Testing to Test-as-a-Service in Cloud Computingijseajournal
Researchers consider that the first edition of the book "The Art of Software Testing" by Myers (1979)
initiated research in Software Testing. Since then, software testing has gone through evolutions that have
driven standards and tools. This evolution has accompanied the complexity and variety of software
deployment platforms. The migration to the cloud allowed benefits such as scalability, agility, and better
return on investment. Cloud computing requires more significant involvement in software testing to ensure
that services work as expected. In addition to testing cloud applications, cloud computing has paved the
way for testing in the Test-as-a-Service model. This review aims to understand software testing in the
context of cloud computing. Based on the knowledge explained here, we sought to linearize the evolution of
software testing, characterizing fundamental points and allowing us to compose a synthesis of the body of
knowledge in software testing, expanded by the cloud computing paradigm.
One of the core quality assurance feature which combines fault prevention and fault detection, is often known as testability approach also. There are many assessment techniques and quantification method evolved for software testability prediction which actually identifies testability weakness or factors to further help reduce test effort. This paper examines all those measurement techniques that are being proposed for software testability assessment at various phases of object oriented software development life cycle. The aim is to find the best metrics suit for software quality improvisation through software testability support. The ultimate objective is to establish the ground work for finding ways reduce the testing effort by improvising software testability and its assessment using well planned guidelines for object-oriented software development with the help of suitable metrics.
Configuration Navigation Analysis Model for Regression Test Case Prioritizationijsrd.com
Regression testing has been receiving increasing attention nowadays. Numerous regression testing strategies have been proposed. Most of them take into account various metrics like cost as well as the ability to find faults quickly thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed which tries to consider all stakeholders and various testing aspects while prioritizing regression test cases.
With interconnectivity between IT Service Providers and their customers and partners growing, fueled by
proliferation of IT Services Outsourcing, with some providers gaining leading positions in marketplace
today, challenges are faced by teams who are tasked to deliver integration projects with much desired
efficiencies both in cost and schedule. Such integrations are growing both in volume and complexity.
Integrations between different autonomous systems such as workflow systems of the providers and their
customers are an important element of this emerging paradigm. In this paper we present an efficient model
to implement such interfaces between autonomous workflow systems with close attention given to major
phases of these projects, from requirement gathering/analysis, to configuration/coding, to
validation/verification, several levels of testing and finally deployment. By deploying a comprehensive
strategy and implementing it in a real corporate environment, a 10%-20% reduction in cost and schedule
year over year was achieved for past several years primarily by improving testing techniques and detecting
bugs earlier in the development life-cycle. Some practical considerations are outlined in addition to
detailing the strategy for testing the autonomous system integrations domain.
Testing throughout the software life cycle & statistic techniquesNovika Damai Yanti
CATEGORIES OF TEST DESIGN TECHNIQUES
Recall reasons that both specification-based (black-box) and structure-based (white-box) approaches to test case design are useful, and list the common techniques for each. (K1)
Characterization of Open-Source Applications and Test Suites ijseajournal
Software systems that meet the stakeholders needs and expectations is the ultimate objective of the software
provider. Software testing is a critical phase in the software development lifecycle that is used to evaluate
the software. Tests can be written by the testers or the automatic test generators in many different ways and
with different goals. Yet, there is a lack of well-defined guidelines or a methodology to direct the testers to
write tests
We want to understand how tests are written and why they may have been written that way. This work is a characterization study aimed at recognizing the factors that may have influenced the development of the test suite. We found that increasing the coverage of the test suites for applications with at least 500 test
cases can make the test suites more costly. The correlation coeffieicent obtained was 0.543. The study also found that there is a positive correlation between the mutation score and the coverage score.
Testing throughout the software life cycle - Testing & Implementationyogi syafrialdi
The development process adopted for a project will depend on the project aims and goals. There are numerous development life cycles that have been developed in order to achieve different required objectives.
Software testing for project report .pdfKamal Acharya
Methods of Software Testing There are two basic methods of performing software testing: 1. Manual testing 2. Automated testing Manual Software Testing As the name would imply, manual software testing is the process of an individual or individuals manually testing software. This can take the form of navigating user interfaces, submitting information, or even trying to hack the software or underlying database. As one might presume, manual software testing is labor-intensive and slow.
Alex Swandi
Program Studi S1 Sistem Informasi
Fakultas Sains dan Teknologi
Universitas Islam Negeri Sultan Syarif Kasim Riau
http://sif.uin-suska.ac.id/
http://fst.uin-suska.ac.id/
http://www.uin-suska.ac.id/
According to Davenport (2014) social media and health care are c.docxmakdul
According to Davenport (2014) social media and health care are collaborating in meeting the needs of health care providers and patients. Social media is taking a step towards focusing on an analytic model to evaluate the value of social media in healthcare. For this assignment you research and investigate the areas of social media that might embrace and benefit from an analytic model combining acquired data and value-based analytics. You will then evaluate the resource addressing the following points:
· Five major stakeholder roles of social media—patients, physicians (and other outpatient care), hospitals, payers (employers, health plans), and health information technology (IT)
· Will social media improve a practice? How so? Provide a thorough rationale.
· Provide a conclusion with the main points .
format:
· Must be two to four
· Must use at least three scholarly sources
.
According to (Fatehi, Gordon & Florida, N.D.) theoretical orient.docxmakdul
According to (Fatehi, Gordon & Florida, N.D.) theoretical orientation represent styles of mind for understanding reality. This theoretical orientation can be organized as a continuum from theoretical constructs that are independent and concrete as with the Behavioral/ CBT theories, to theoretical constructs that are interdependent and abstract as with the Psychodynamic theories (Fatehi, Gordon & Florida, N.D.). Family systems and Humanistic/Existential are theoretical midpoints (Fatehi, Gordon & Florida, N.D.). Trait theory tends to focus on the premise that we are born with traits or characteristics that make us unique and explain our behaviors (Cervone& Pervin, 2019). For example, introversion, extroversion, shyness, agreeableness, kindness, etc. all these innate characteristics that we are born help to explain why we behave in a certain manner according to the situations we face, (Cervone& Pervin, 2019). Psychoanalytic perspective on the other hand focuses on childhood experiences and the unconscious mind which plays a role in our personality development, (Cervone& Pervin, 2019).
According to Freud, (Cervone& Pervin, 2019) our unconscious mind includes all our hidden desires and conflicts which form the root cause of our mental health issues or maladaptive behaviors. The main difference between these two perspectives is that trait theory helps to explain why we behave in a certain manner, whereas psychoanalytic theory only describes the personality and predicting behavior and not really explaining why we behave the way we do. There is no such evident similarity between the two perspectives, but kind of rely on underlying mechanisms to explain personality. Also, there is some degree of subjectivity present in both the perspectives. Trait theories involve subjectivity regarding interpretations of which can be considered as important traits that explain our behaviors, and psychoanalytic theory is subjective and vague in the concepts been used like the unconscious mind. My opinions accord with the visible contrasts between the two, one focused on internal features describing our behaviors in clearer words, whilst other concentrating on unconscious mind in anticipating behavior which is ambiguous and harder to grasp.
References
Cervone, D., & Pervin, L. A. (2019). Personality: Theory and research (14th ed.). Wiley.
Fatehi, M., Gordon, R. M., & Florida, O. A Meta-Theoretical Integration of Psychotherapy Orientations.
.
More Related Content
Similar to A Comprehensive Approach to Data Warehouse TestingMatteo G.docx
Configuration Navigation Analysis Model for Regression Test Case Prioritizationijsrd.com
Regression testing has been receiving increasing attention nowadays. Numerous regression testing strategies have been proposed. Most of them take into account various metrics like cost as well as the ability to find faults quickly thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed which tries to consider all stakeholders and various testing aspects while prioritizing regression test cases.
With interconnectivity between IT Service Providers and their customers and partners growing, fueled by
proliferation of IT Services Outsourcing, with some providers gaining leading positions in marketplace
today, challenges are faced by teams who are tasked to deliver integration projects with much desired
efficiencies both in cost and schedule. Such integrations are growing both in volume and complexity.
Integrations between different autonomous systems such as workflow systems of the providers and their
customers are an important element of this emerging paradigm. In this paper we present an efficient model
to implement such interfaces between autonomous workflow systems with close attention given to major
phases of these projects, from requirement gathering/analysis, to configuration/coding, to
validation/verification, several levels of testing and finally deployment. By deploying a comprehensive
strategy and implementing it in a real corporate environment, a 10%-20% reduction in cost and schedule
year over year was achieved for past several years primarily by improving testing techniques and detecting
bugs earlier in the development life-cycle. Some practical considerations are outlined in addition to
detailing the strategy for testing the autonomous system integrations domain.
Testing throughout the software life cycle & statistic techniquesNovika Damai Yanti
CATEGORIES OF TEST DESIGN TECHNIQUES
Recall reasons that both specification-based (black-box) and structure-based (white-box) approaches to test case design are useful, and list the common techniques for each. (K1)
Characterization of Open-Source Applications and Test Suites ijseajournal
Software systems that meet the stakeholders needs and expectations is the ultimate objective of the software
provider. Software testing is a critical phase in the software development lifecycle that is used to evaluate
the software. Tests can be written by the testers or the automatic test generators in many different ways and
with different goals. Yet, there is a lack of well-defined guidelines or a methodology to direct the testers to
write tests
We want to understand how tests are written and why they may have been written that way. This work is a characterization study aimed at recognizing the factors that may have influenced the development of the test suite. We found that increasing the coverage of the test suites for applications with at least 500 test
cases can make the test suites more costly. The correlation coeffieicent obtained was 0.543. The study also found that there is a positive correlation between the mutation score and the coverage score.
Testing throughout the software life cycle - Testing & Implementationyogi syafrialdi
The development process adopted for a project will depend on the project aims and goals. There are numerous development life cycles that have been developed in order to achieve different required objectives.
Software testing for project report .pdfKamal Acharya
Methods of Software Testing There are two basic methods of performing software testing: 1. Manual testing 2. Automated testing Manual Software Testing As the name would imply, manual software testing is the process of an individual or individuals manually testing software. This can take the form of navigating user interfaces, submitting information, or even trying to hack the software or underlying database. As one might presume, manual software testing is labor-intensive and slow.
Alex Swandi
Program Studi S1 Sistem Informasi
Fakultas Sains dan Teknologi
Universitas Islam Negeri Sultan Syarif Kasim Riau
http://sif.uin-suska.ac.id/
http://fst.uin-suska.ac.id/
http://www.uin-suska.ac.id/
According to Davenport (2014) social media and health care are c.docxmakdul
According to Davenport (2014) social media and health care are collaborating in meeting the needs of health care providers and patients. Social media is taking a step towards focusing on an analytic model to evaluate the value of social media in healthcare. For this assignment you research and investigate the areas of social media that might embrace and benefit from an analytic model combining acquired data and value-based analytics. You will then evaluate the resource addressing the following points:
· Five major stakeholder roles of social media—patients, physicians (and other outpatient care), hospitals, payers (employers, health plans), and health information technology (IT)
· Will social media improve a practice? How so? Provide a thorough rationale.
· Provide a conclusion with the main points .
format:
· Must be two to four
· Must use at least three scholarly sources
.
According to (Fatehi, Gordon & Florida, N.D.) theoretical orient.docxmakdul
According to (Fatehi, Gordon & Florida, N.D.) theoretical orientation represent styles of mind for understanding reality. This theoretical orientation can be organized as a continuum from theoretical constructs that are independent and concrete as with the Behavioral/ CBT theories, to theoretical constructs that are interdependent and abstract as with the Psychodynamic theories (Fatehi, Gordon & Florida, N.D.). Family systems and Humanistic/Existential are theoretical midpoints (Fatehi, Gordon & Florida, N.D.). Trait theory tends to focus on the premise that we are born with traits or characteristics that make us unique and explain our behaviors (Cervone& Pervin, 2019). For example, introversion, extroversion, shyness, agreeableness, kindness, etc. all these innate characteristics that we are born help to explain why we behave in a certain manner according to the situations we face, (Cervone& Pervin, 2019). Psychoanalytic perspective on the other hand focuses on childhood experiences and the unconscious mind which plays a role in our personality development, (Cervone& Pervin, 2019).
According to Freud, (Cervone& Pervin, 2019) our unconscious mind includes all our hidden desires and conflicts which form the root cause of our mental health issues or maladaptive behaviors. The main difference between these two perspectives is that trait theory helps to explain why we behave in a certain manner, whereas psychoanalytic theory only describes the personality and predicting behavior and not really explaining why we behave the way we do. There is no such evident similarity between the two perspectives, but kind of rely on underlying mechanisms to explain personality. Also, there is some degree of subjectivity present in both the perspectives. Trait theories involve subjectivity regarding interpretations of which can be considered as important traits that explain our behaviors, and psychoanalytic theory is subjective and vague in the concepts been used like the unconscious mind. My opinions accord with the visible contrasts between the two, one focused on internal features describing our behaviors in clearer words, whilst other concentrating on unconscious mind in anticipating behavior which is ambiguous and harder to grasp.
References
Cervone, D., & Pervin, L. A. (2019). Personality: Theory and research (14th ed.). Wiley.
Fatehi, M., Gordon, R. M., & Florida, O. A Meta-Theoretical Integration of Psychotherapy Orientations.
.
According to Libertarianism, there is no right to any social service.docxmakdul
According to Libertarianism, there is no right to any social services besides those of a night-watchman state, protecting citizens from harming each other via courts, police, and military.
Consider this town
that decided to remove fire rescue as a basic social service. To benefit from it, one had to pay a yearly fee. Do you think libertarians would generally have to support such a policy in order to be consistent? Why or why not? Also, can you think of any other social services that might no longer exist in a libertarian society? (Btw, none has ever existed).
.
According to Kirk (2016), most of your time will be spent working wi.docxmakdul
According to Kirk (2016), most of your time will be spent working with your data. The four following group actions were mentioned by Kirk (2016):
Data acquisition: Gathering the raw material
Data examination: Identifying physical properties and meaning
Data transformation: Enhancing your data through modification and consolidation
Data exploration: Using exploratory analysis and research techniques to learn
Select 1 data action and elaborate on the actions performed in that action group.
.
According to cultural deviance theorists like Cohen, deviant sub.docxmakdul
According to cultural deviance theorists like Cohen, deviant subcultures have their own value system that often opposes those of society at large. These contradictory "values" have been embraced by generations within that culture—and as a way to act out against the majority value system from which they feel excluded. Write an essay of 750-1,000 words that addresses the following:
How has rap culture perpetuated subcultural values, and promoted violence and crime among young men?
Given its sharp deviation from conventional values and norms, how and why would theorists explain the persistence and popularity of this subculture? (See examples Tupac Shakur page 109-110 and 50 Cent page 135).
Be sure to cite three to five relevant scholarly sources in support of your content
.
According to Gray et al, (2017) critical appraisal is the proce.docxmakdul
According to Gray et al, (2017) “critical appraisal is the process of carefully and systematically assessing the outcome of all aspects of a study, judging the strengths, limitation, trustworthiness, meaning, and its applicability to practice”. The steps involved in critical appraisal include “identifying the study's elements or processes, determining the strengths and weaknesses, and evaluating the credibility and trustworthiness of the study” (Gray et al., 2017). The journal article chosen is
“change in staff perspectives on indwelling urinary catheter use after implementation of an intervention bundle in seven Swiss acute care hospitals: a result of a before/after survey study”
by Niederhauser, Zullig, Marschall, Schweiger, John, Kuster, and Schwappach. (2019).
Identifying the study's elements or processes
A significant issue addressed by the study is the nursing “staffs’ perspective towards indwelling urinary catheter (IUC) and evaluation of changes in their perspectives towards indwelling urinary catheter (IUC) use after implementation of a 1-year quality improvement project” (Niederhauser et al, 2019). the process of the research was conducted in “seven acute care hospitals in Switzerland” (Niederhauser et al, 2019). With a “sample size of 1579 staff members participated in the baseline survey and 1527 participated in the follow-up survey. The survey captures all nursing and medical staff members working at the participating hospitals at the time of survey distribution, using a multimodal intervention bundle, consisting of an evidence-based indication list, daily re-evaluation of ongoing catheter needs, and staff training were implemented over the course of 9 months” (Niederhauser et al, 2019).
Determining the strengths and weaknesses
A great strength of the study is a large sample size of over 1000 and the use of well-constructed and easy-to-read heading for better understanding. Also, the use of figures, graphs, and tables make the article less cumbersome to read. Another strength is the implementation of the ethical principles of research by enabling informed consent and voluntary participation as well as confidentiality and anonymity of information.
On the other hand, the study has several weaknesses such as the use of “the theory of planned behavior to model intentions to reduce catheter use, but it is not possible to know if changes observed in staff perception led to a true change in practice” (Niederhauser et al, 2019). Another weakness of the study is the repeated survey design which allows assessment of changes in staff perspectives after implementation of a quality improvement intervention but the sustainability of the effects over time could not be evaluated.
Evaluating the credibility and trustworthiness of the study
Although the study used a larger sample size of over 1000, the “use of a single-group design and no control group weakens its credibility and trustworthiness because there are no causal inferences abou.
According to article Insecure Policing Under Racial Capitalism by.docxmakdul
According to article "Insecure: Policing Under Racial Capitalism" by Robin D.G. Kelley and the article "Yes, We Mean Literally Abolish the Police" by Mariame Kaba, the police are no longer an attribute of safety and security. The facts that are given in the articles are similar within the meaning of the content. The police do not serve for the benefit of the whole community. Racial and class division according to social status became the basis of lawlessness and injustice on the part of the police. Kaaba in his article cites several stories confirming the racial hatred that led to the murder of African Americans. After that, people massively took to the streets of many cities in several countries, demanding an end to racial discrimination and the murder of African Americans. Kelley's article describes numerous manifestos where demands for police abolition have been raised, but all have been rejected. In the protests, people suggested that they themselves would take care of each other, which the police could not do. I understand that the police system is far from ideal and the permissiveness of police representatives should be limited. Ruth Wilson Gilmore says that "capitalism is never racial." I think that this phrase she wants to say that the stronger people take away from the weak people and use them for their own well-being. And since the roots of history go back to slavery, then African Americans are the weak link. In this regard, a huge number of prisons and police power appeared. The common and small class do not feel protected, on the contrary; they expect a threat from people who must protect them. The police take an oath to respect and protect human and civil rights and freedoms, regardless of skin color and social status. If this does not happen, then you need to change the system.
.
Abstract In this experiment, examining the equivalence poi.docxmakdul
Abstract:
In this experiment, examining the equivalence point in a titration with NaOH identified an
unknown diprotic acid. The molar mass of the unknown was found to be 100.78 g/mol with pKa
values of 2.6 and 6.6. The closest diprotic acid to this molar mass is malonic acid with a percent
error of 3.48%.
Introduction:
The purpose of the experiment was to determine the identity of an unknown diprotic acid. The
equivalence and half-equivalence points on the titration curve give important information, which
can then be used to calculate the molecular weight of the acid. The equivalence point is the
moment when there is an equal amount of acid and NaOH. Knowing the concentration and
volume of added NaOH at that moment, the amount of moles of NaOH can be determined. The
amount of moles of NaOH is then equivalent to the amount of acid present. Dividing the original
mass of the acid by the moles present gave the molar mass of the acid.
In this particular titration, there were two equivalence points as the acid is diprotic.
Consequently, the titration curve had two inflection points. The acid dissociated in a two-step
process with the net reaction being:
H2X + 2 NaOH Na2X + 2 H2O
This was important to take into consideration when calculating the molar mass of the diprotic
acid. If the first equivalence point was to be used, the ratio of acid to NaOH was 1:1. If the
second equivalence point was used in the calculations, the ratio became 1:2 as now a second
set of NaOH molecules reacted with the acid to dissociate the second hydrogen ion. The
titration curve also showed the pKa values of the acid. This happened at the half-equivalence
point where half of the acid was dissociated to its conjugate base (again, because of the diprotic
properties of the acid, this happens twice on the curve). The Henderson Hasselbalch equation
pH = pKa+log(A-/HA)
shows that at the half-equivalence point, the pKa value equaled the pH and was visually
represented by the flattest part of the graphs.
Discussion:
The titration graph showed that the data was consistent with the methodology and proved to be
an precise execution of the procedure and followed the expected shape. One possible source of
error was the actual mass of the acid solid. While transferring the dust from the weigh boat to
the solution, some remained in the weigh boat this could have altered the molar mass
calculations and shifted the final the final mass lighter than actual.
The Vernier pH method was definitely a much more concrete method of interpreting the results.
It was possible to see which addition of NaOH gave the greatest increase in pH ( greatest 1st
derivative of the titration graph). The relying solely on the indicator color would make it very
difficult to judge at which precise point the color shifted most, as the shift was a lot more gradual
compared to the precise numbers. This may have been a more reliable method if there was a
de.
ACC 403- ASSIGNMENT 2 RUBRIC!!!
Points: 280
Assignment 2: Audit Planning and Control
Criteria
UnacceptableBelow 60% F
Meets Minimum Expectations60-69% D
Fair70-79% C
Proficient80-89% B
Exemplary90-100% A
1. Outline the critical steps inherent in planning an audit and designing an effective audit program. Based upon the type of company selected, provide specific details of the actions that the company should undertake during planning and designing the audit program.
Weight: 15%
Did not submit or incompletely outlined the critical steps inherent in planning an audit and designing an effective audit program. Did not submit or incompletely provided specific details of the actions that the company should undertake during planning and designing the audit program, based upon the type of company selected.
Insufficiently outlined the critical steps inherent in planning an audit and designing an effective audit program. Insufficiently provided specific details of the actions that the company should undertake during planning and designing the audit program, based upon the type of company selected.
Partially outlined the critical steps inherent in planning an audit and designing an effective audit program. Partially provided specific details of the actions that the company should undertake during planning and designing the audit program, based upon the type of company selected.
Satisfactorily outlined the critical steps inherent in planning an audit and designing an effective audit program. Satisfactorily provided specific details of the actions that the company should undertake during planning and designing the audit program, based upon the type of company selected.
Thoroughly outlined the critical steps inherent in planning an audit and designing an effective audit program. Thoroughly provided specific details of the actions that the company should undertake during planning and designing the audit program, based upon the type of company selected.
2. Examine at least two (2) performance ratios that you would use in order to determine which analytical tests to perform. Identify the accounts that you would test, and select at least three (3) analytical procedures that you would use in your audit.
Weight: 15%
Did not submit or incompletely examined at least two (2) performance ratios that you would use in order to determine which analytical tests to perform. Did not submit or incompletely identified the accounts that you would test; did not submit or incompletely selected at least three (3) analytical procedures that you would use in your audit.
Insufficiently examined at least two (2) performance ratios that you would use in order to determine which analytical tests to perform. Insufficiently identified the accounts that you would test; insufficiently selected at least three (3) analytical procedures that you would use in your audit.
Partially examined at least two (2) performance ratios that you would use in order to determine which analytical tests .
ACC 601 Managerial Accounting Group Case 3 (160 points) .docxmakdul
ACC 601 Managerial Accounting
Group Case 3 (160 points)
Instructions:
1. As a group, complete the following activities in good form. Use excel or
word only. Provide all supporting calculations to show how you arrived at
your numbers
2. Add only the names of group members who participated in the completion
of this assignment.
3. Submit only one copy of your completed work via Moodle. Do not send it to
me by email.
4. Due: No later than the last day of Module 7. Please note that your professor
has the right to change the due date of this assignment.
Part A: Capital Budgeting Decisions
Chee Company has gathered the following data on a proposed investment project:
Investment required in equipment ............. $240,000
Annual cash inflows .................................. $50,000
Salvage value ............................................ $0
Life of the investment ............................... 8 years
Required rate of return .............................. 10%
Assets will be depreciated using straight
line depreciation method
Required:
Using the net present value and the internal rate of return methods, is this a good investment?
Part B: Master Budget
You have just been hired as a new management trainee by Earrings Unlimited, a distributor of
earrings to various retail outlets located in shopping malls across the country. In the past, the
company has done very little in the way of budgeting and at certain times of the year has
experienced a shortage of cash. Since you are well trained in budgeting, you have decided to
prepare a master budget for the upcoming second quarter. To this end, you have worked with
accounting and other areas to gather the information assembled below.
The company sells many styles of earrings, but all are sold for the same price—$10 per pair. Actual
sales of earrings for the last three months and budgeted sales for the next six months follow (in pairs
of earrings):
January (actual) 20,000 June (budget) 50,000
February (actual) 26,000 July (budget) 30,000
March (actual) 40,000 August (budget) 28,000
April (budget) 65,000 September (budget) 25,000
May (budget) 100,000
The concentration of sales before and during May is due to Mother’s Day. Sufficient inventory should
be on hand at the end of each month to supply 40% of the earrings sold in the following month.
Suppliers are paid $4 for a pair of earrings. One-half of a month’s purchases is paid for in the month
of purchase; the other half is paid for in the following month. All sales are on credit. Only 20% of a
month’s sales are collected in the month of sale. An additional 70% is collected in the following
month, and the remaining 10% is collected in the second month following sale. Bad debts have been
negligible.
Monthly operating expenses for the company are given below:
Variable:
Sales commissions 4 % of sales
.
Academic Integrity A Letter to My Students[1] Bill T.docxmakdul
Academic Integrity:
A Letter to My Students[1]
Bill Taylor
Professor of Political Science
Oakton Community College
Des Plaines, IL 60016
[email protected]
Here at the beginning of the semester I want to say something to you about academic integrity.[2]
I’m deeply convinced that integrity is an essential part of any true educational experience, integrity on
my part as a faculty member and integrity on your part as a student.
To take an easy example, would you want to be operated on by a doctor who cheated his way through
medical school? Or would you feel comfortable on a bridge designed by an engineer who cheated her
way through engineering school. Would you trust your tax return to an accountant who copied his
exam answers from his neighbor?
Those are easy examples, but what difference does it make if you as a student or I as a faculty member
violate the principles of academic integrity in a political science course, especially if it’s not in your
major?
For me, the answer is that integrity is important in this course precisely because integrity is important in
all areas of life. If we don’t have integrity in the small things, if we find it possible to justify plagiarism or
cheating or shoddy work in things that don’t seem important, how will we resist doing the same in areas
that really do matter, in areas where money might be at stake, or the possibility of advancement, or our
esteem in the eyes of others?
Personal integrity is not a quality we’re born to naturally. It’s a quality of character we need to nurture,
and this requires practice in both meanings of that word (as in practice the piano and practice a
profession). We can only be a person of integrity if we practice it every day.
What does that involve for each of us in this course? Let’s find out by going through each stage in the
course. As you’ll see, academic integrity basically requires the same things of you as a student as it
requires of me as a teacher.
I. Preparation for Class
What Academic Integrity Requires of Me in This Area
With regard to coming prepared for class, the principles of academic integrity require that I come having
done the things necessary to make the class a worthwhile educational experience for you. This requires
that I:
reread the text (even when I’ve written it myself),
clarify information I might not be clear about,
prepare the class with an eye toward what is current today (that is, not simply rely on past
notes), and
plan the session so that it will make it worth your while to be there.
What Academic Integrity Requires of You in This Area
With regard to coming prepared for class, the principles of academic integrity suggest that you have a
responsibility to yourself, to me, and to the other students to do the things necessary to put yourself in
a position to make fruitful contributions to class discussion. This will require you to:
read the text before.
Access the Center for Disease Control and Prevention’s (CDC’s) Nu.docxmakdul
Access the Center for Disease Control and Prevention’s (CDC’s)
“Nutrition, Physical Activity, and Obesity: Data, Trends and Maps”
database. Choose a state other than your home state and compare their health status and associated behaviors. What behaviors lead to the current obesity status?
Initial discussion post should be approximately 300 words. Any sources used should be cited in APA format.
.
According to DSM 5 This patient had very many symptoms that sugg.docxmakdul
According to DSM 5 This patient had very many symptoms that suggested Major Depressive Disorder.
Objective(s)
Analyze psychometric properties of assessment tools
Evaluate appropriate use of assessment tools in psychotherapy
Compare assessment tools used in psychotherapy
.
Acceptable concerts include professional orchestras, soloists, jazz,.docxmakdul
Acceptable concerts include professional orchestras, soloists, jazz, Broadway musicals and instrumental or vocal ensembles, and comparable college or community groups performing music relevant to the content of this class. (Optionally, either your concert report
or
your concert review - but not both unless advance permission is given - may be based on a concert of non-western music selected from events on the concert list.)
Acceptable concerts include the following:
• Symphony orchestras • Concert bands and wind ensembles • Chamber Music (string quartets, brass and woodwind quintets, etc.) • Solo recitals (piano, voice, etc.) • Choral concerts • Early music concerts • Non-western music • Some jazz concerts • Opera• Broadway Musicals• Flamenco• Ballet• Tango
Assignment Format
The following are required on the concert review assignment and, thus, may affect your grade.
• Must be typed• Must be double-spaced• Must be between
2 and 4 pages
in length
not including the cover sheet
.• Must use conventional size and formatting of text - e.g. 10-12 point serif or sans serif fonts with normal margins. • Must include the printed program from the concert and/or your ticket stubs. Photocopies are unacceptable. (Contact me at least 24 hours before due date if any materials are unavailable.)• All materials (text, program, ticket stub) must be
stapled
together securely. Folded corners, paper clips, etc. instead of staples will not be accepted.• Careful editing, proofreading, and spelling are expected, although minor errors will not affect your grade.
Papers that do not follow these format guidelines may be returned for resubmission, and late penalties will apply.
Concert Review Assignment Content
I. Cover Sheet:
Include the following on a cover sheet attached to the front of your review:
• Title or other description of the event/performers you heard, along with the date and location of the performance. For example:
New World Symphony Orchestra
1258 Lincoln Road
Saturday, June 5, 2013
Lincoln Road Theater, Miami Beach
• Your name, assignment submission date, course. For example:
Pat Romero
October 31, 2013
Humanities 1020 MWF 8:05 a.m.
II. Descriptions
The main body of the concert review should include brief discussions of
three of the
pieces
in the concert you attend. In most cases, a single paragraph for each piece should be sufficient, although you may wish to break descriptions of longer pieces into separate short paragraphs, one per movement.
Your description of each piece (song) should include:
• The title of the piece and the composer's name if possible, as listed in the concert program.• A brief description of your reaction to the piece. For example:
When the piece started I thought it was going to be slow and boring, but the faster section in the first movement made it more exciting. A really great flute solo full of fast and high notes in the third movement caught my attention. I'm not sure, but I thought that som.
ACA was passed in 2010, under the presidency of Barack Obama. Pr.docxmakdul
ACA was passed in 2010, under the presidency of Barack Obama. Prior to this new act, there were plenty of votes that did not agree with the notion of accessible insurance. Before 2010, The private sector had been given coverage in such a way that Milstead and Short (2019) called it sickness insurance; meaning companies will risk incurring medical expenses as long as it was balanced by healthy people. They were doing so by excluding people that had pre-existing conditions, becoming a very solvent business (Milstead & Short, 2019). After ACA was passed that was no longer the case. When President Trump came into term he did so by bringing his own healthcare agenda, which attempted to repeal ACA, but ultimately failed to come up with a replacement.
In 2016, the Republican's party platform was to repeal ACA, while continuing Medicare and Medicaid, but on the other hand, democrats put down that Obamacare is a step towards the goals of universal health care, and that this was just the beginning (Physicians for a National Health Program, n.d.). As for the cost analysis of repealing the Affordable Care Act, this would increase the number of uninsured people by 23 million, and it will cost about 350 billion through 2027, as well as creating costly coverage provisions to replace it (Committee for a Responsible Federal Budget, 2017).
(2 references required)
.
Access the FASB website. Once you login, click the FASB Accounting S.docxmakdul
Access the FASB website. Once you login, click the FASB Accounting Standards Codification link. Review the materials in the FASB Codification, especially the links on the left side column. Next, write a 1-page memo to a friend introducing and explaining this new accounting research resource that you have found. Provide at least one APA citation to the FASB Codification and reference that citation using the APA guidelines.
.
Academic Paper Overview This performance task was intended to asse.docxmakdul
Academic Paper Overview This performance task was intended to assess students’ ability to conduct scholarly and responsible research and articulate an evidence-based argument that clearly communicates the conclusion, solution, or answer to their stated research question. More specifically, this performance task was intended to assess students’ ability to: • Generate a focused research question that is situated within or connected to a larger scholarly context or community; • Explore relationships between and among multiple works representing multiple perspectives within the scholarly literature related to the topic of inquiry; • Articulate what approach, method, or process they have chosen to use to address their research question, why they have chosen that approach to answering their question, and how they employed it; • Develop and present their own argument, conclusion, or new understanding while acknowledging its limitations and discussing implications; • Support their conclusion through the compilation, use, and synthesis of relevant and significant evidence generated by their research; • Use organizational and design elements to effectively convey the paper’s message; • Consistently and accurately cite, attribute, and integrate the knowledge and work of others, while distinguishing between their voice and that of others; and • Generate a paper in which word choice and syntax enhance communication by adhering to established conventions of grammar, usage, and mechanics.
.
Academic Research Team Project PaperCOVID-19 Open Research Datas.docxmakdul
Academic Research Team Project Paper
COVID-19 Open Research Dataset Challenge (CORD-19)
An AI challenge with AI2, CZI, MSR, Georgetown, NIH & The White House
(1) FULL-LENGTH PROJECT
Dataset Description
In response to the COVID-19 pandemic, the White House and a coalition of leading research groups have prepared the COVID-19 Open Research Dataset (CORD-19). CORD-19 is a resource of over 44,000 scholarly articles, including over 29,000 with full text, about COVID-19, SARS-CoV-2, and related corona viruses. This freely available dataset is provided to the global research community to apply recent advances in natural language processing and other AI techniques to generate new insights in support of the ongoing fight against this infectious disease. There is a growing urgency for these approaches because of the rapid acceleration in new coronavirus literature, making it difficult for the medical research community to keep up.
Call to Action
We are issuing a call to action to the world's artificial intelligence experts to develop text and data mining tools that can help the medical community develop answers to high priority scientific questions. The CORD-19 dataset represents the most extensive machine-readable coronavirus literature collection available for data mining to date. This allows the worldwide AI research community the opportunity to apply text and data mining approaches to find answers to questions within, and connect insights across, this content in support of the ongoing COVID-19 response efforts worldwide. There is a growing urgency for these approaches because of the rapid increase in coronavirus literature, making it difficult for the medical community to keep up.
A list of our initial key questions can be found under the
Tasks
section of this dataset. These key scientific questions are drawn from the NASEM’s SCIED (National Academies of Sciences, Engineering, and Medicine’s Standing Committee on Emerging Infectious Diseases and 21st Century Health Threats)
research topics
and the World Health Organization’s
R&D Blueprint
for COVID-19.
Many of these questions are suitable for text mining, and we encourage researchers to develop text mining tools to provide insights on these questions.
In this project, you will follow your own interests to create a portfolio worthy single-frame viz or multi-frame data story that will be shared in your presentation. You will use all the skills taught in this course to complete this project step-by-step, with guidance from your instructors along the way. You will first create a project proposal to identify your goals for the project, including the question you wish to answer or explore with data. You will then find data that will provide the information you are seeking. You will then import that data into Tableau and prepare it for analysis. Next, you will create a dashboard that will allow you to explore the data in-depth and identify meaningful insights. You will then give structure .
AbstractVoice over Internet Protocol (VoIP) is an advanced t.docxmakdul
Abstract
Voice over Internet Protocol (VoIP) is an advanced telecommunication technology which transfers the voice/video over
high speed network that provides advantages of flexibility, reliability and cost efficient advanced telecommunication
features. Still the issues related to security are averting many organizations to accept VoIP cloud environment due to
security threats, holes or vulnerabilities. So, the novel secured framework is absolutely necessary to prevent all kind of
VoIP security issues. This paper points out the existing VoIP cloud architecture and various security attacks and issues
in the existing framework. It also presents the defense mechanisms to prevent the attacks and proposes a new security
framework called Intrusion Prevention System (IPS) using video watermarking and extraction technique and Liveness
Voice Detection (LVD) technique with biometric features such as face and voice. IPSs updated with new LVD features
protect the VoIP services not only from attacks but also from misuses.
A Comprehensive Survey of Security Issues and
Defense Framework for VoIP Cloud
Ashutosh Satapathy* and L. M. Jenila Livingston
School of Computing Science and Engineering, VIT University, Chennai - 600127, Tamil Nadu, India;
[email protected], [email protected]
Keywords: Defense Mechanisms, Liveness Voice Detection, VoIP Cloud, Voice over Internet Protocol, VoIP Security Issues
1. Introduction
The rapid progress of VoIP over traditional services is
led to a situation that is common to many innovations
and new technologies such as VoIP cloud and peer to
peer services like Skype, Google Hangout etc. VoIP is the
technology that supports sending voice (and video) over
an Internet protocol-based network1,2. This is completely
different than the public circuit-switched telephone net-
work. Circuit switching network allocates resources to
each individual call and path is permanent throughout
the call from start to end. Traditional telephony services
are provided by the protocols/components such as SS7, T
carriers, Plain Old Telephone Service (POTS), the Public
Switch Telephone Network (PSTN), dial up, local loops
and anything under International Telecommunication
Union. IP networks are based on packet switching and
each packet follows different path, has its own header and
is forwarded separately by routers. VoIP network can be
constructed in various ways by using both proprietary
protocols and protocols based on open standards.
1.1 VoIP Layer Architecture
VoIP communication system typically consist of a front
end platform (soft-phone, PBX, gateway, call manager),
back end platform (server, CPU, storage, memory, net-
work) and intermediate platforms such as VoIP protocols,
database, authentication server, web server, operating sys-
tems etc. It is mainly divided into five layers as shown in
Figure1.
1.2 VoIP Cloud Architecture
VoIP cloud is the framework for delivering telephony
services in which resourc.
A Comprehensive Approach to Data Warehouse Testing
Matteo Golfarelli
DEIS - University of Bologna
Via Sacchi, 3
Cesena, Italy
[email protected]
Stefano Rizzi
DEIS - University of Bologna
Viale Risorgimento, 2
Bologna, Italy
[email protected]
ABSTRACT
Testing is an essential part of the design life-cycle of any
software product. Nevertheless, while most phases of data
warehouse design have received considerable attention in the
literature, not much has been said about data warehouse
testing. In this paper we introduce a number of data mart-
specific testing activities, we classify them in terms of what
is tested and how it is tested, and we discuss how they can
be framed within a reference design methodology.
Categories and Subject Descriptors
H.4.2 [Information Systems Applications]: Types of
Systems—Decision support; D.2.5 [Software Engineering]:
Testing and Debugging
General Terms
Verification, Design
Keywords
data warehouse, testing
1. INTRODUCTION
Testing is an essential part of the design life-cycle of any
software product. Needless to say, testing is especially crit-
ical to success in data warehousing projects because users
need to trust in the quality of the information they access.
Nevertheless, while most phases of data warehouse design
have received considerable attention in the literature, not
much has been said about data warehouse testing.
As agreed by most authors, the difference between test-
ing data warehouse systems and generic software systems or
even transactional systems depends on several aspects [21,
23, 13]:
• Software testing is predominantly focused on program
code, while data warehouse testing is directed at data
and information. As a matter of fact, the key to data
warehouse testing is to know the data and what the
answers to user queries are supposed to be.
• Differently from generic software systems, data ware-
house testing involves a huge data volume, which sig-
nificantly impacts performance and productivity.
• Data warehouse testing has a broader scope than soft-
ware testing because it focuses on the correctness and
usefulness of the information delivered to users. In
fact, data validation is one of the main goals of data
warehouse testing.
• Though a generic software system may have a large
number of different use scenarios, the valid combina-
tions of those scenarios are limited. On the other hand,
data warehouse systems are aimed at supporting any
views of data, so the possible combinations are virtu-
ally unlimited and cannot be fully tested.
• While most testing activities are carried out before de-
ployment in generic software systems, data warehouse
testing activities still go on after system release.
• Typical software development projects are self-contained.
Data warehousing projects never really come to an end;
it is very difficult to anticipate future requirements for
the decision-making process, so only a few require-
ments can be stated from the beginning. Besides, it
is almost impossible to predict all the possible types
of errors that will be encountered in real operational
data. For this reason, regression testing is inherently
involved.
Like most generic software systems, data warehouse systems admit different types of tests. For in-
stance, it is very useful to distinguish between unit test,
a white-box test performed on each individual component
considered in isolation from the others, and integration test,
a black-box test where the system is tested in its entirety.
Also regression test, which checks that the system still func-
tions correctly after a change has occurred, is considered to
be very important for data warehouse systems because of
their ever-evolving nature. However, the peculiar character-
istics of data warehouse testing and the complexity of data
warehouse projects ask for a deep revision and contextualiza-
tion of these test types, aimed in particular at emphasizing
the relationships between testing activities on the one side,
design phases and project documentation on the other.
From the methodological point of view we mention that,
while testing issues are often considered only during the very
last phases of data warehouse projects, all authors agree that moving accurate test planning to the early project phases is one of the keys to success. The main reason for this is that, as software engineers know very well, the earlier an error is detected in the software design cycle, the cheaper it is to correct. Besides, planning early testing activities to be carried out during design and before implementation gives project managers an effective way to regularly measure and document the project's progress.
Since the correctness of a system can only be measured
with reference to a set of requirements, successful testing begins with the gathering and documentation of end-user requirements [8]. Since most end-user requirements
are about data analysis and data quality, it is inevitable
that data warehouse testing primarily focuses on the ETL
process on the one hand (this is sometimes called back-end
testing [23]), on reporting and OLAP on the other (front-end
testing [23]). While back-end testing aims at ensuring that
data loaded into the data warehouse are consistent with the
source data, front-end testing aims at verifying that data are
correctly navigated and aggregated in the available reports.
From the organizational point of view, several roles are in-
volved with testing [8]. Analysts draw conceptual schemata, which represent the user requirements to be used as a reference for testing. Designers are responsible for logical schemata of data repositories and for data staging flows, which should be
tested for efficiency and robustness. Testers develop and ex-
ecute test plans and scripts. Developers perform white box
unit tests. Database administrators test for performance
and stress, and set up test environments. Finally, end-users
perform functional tests on reporting and OLAP front-ends.
In this paper we propose a comprehensive approach to
testing data warehouse systems. More precisely, consider-
ing that data warehouse systems are commonly built in a
bottom-up fashion, by iteratively designing and implement-
ing one data mart at a time, we will focus on the test of a
single data mart. The main features of our approach can be
summarized as follows:
• A substantial portion of the testing effort is advanced
to the design phase to reduce the impact of error cor-
rection.
• A number of data mart-specific testing activities are
identified and classified in terms of what is tested and
how it is tested.
• A tight relationship is established between testing ac-
tivities and design phases within the framework of a
reference methodological approach to design.
• When possible, testing activities are related to quality
metrics to allow their quantitative assessment.
The remainder of the paper is organized as follows. Af-
ter briefly reviewing the related literature in Section 2, in
Section 3 we propose the reference methodological frame-
work. Then, Section 4 classifies and describes the data mart-
specific testing activities we devised, while Section 5 briefly
discusses some issues related to test coverage. Section 6 pro-
poses a timeline for testing in the framework of the reference
design methodology, and Section 7 summarizes the lessons
we learnt.
2. RELATED WORKS
The literature on software engineering is huge, and it in-
cludes a detailed discussion of different approaches to the
testing of software systems (e.g., see [16, 19]). However,
only a few works discuss the issues raised by testing in the
data warehousing context.
[13] summarizes the main challenges of data warehouse
and ETL testing, and discusses their phases and goals dis-
tinguishing between retrospective and prospective testing. It
also proposes to base part of the testing activities (those
related to incremental load) on mock data.
In [20] the authors propose a basic set of attributes for a data warehouse test scenario based on the IEEE 829 standard. Among these attributes are the test purpose, its performance requirements, its acceptance criteria, and the activities for its completion.
[2] proposes a process for data warehouse testing centered
on a unit test phase, an integration test phase, and a user
acceptance test phase.
[21] reports eight classical mistakes in data warehouse
testing; among these: not closely involving end users, testing
reports rather than data, skipping the comparison between
data warehouse data and data source data, and using only
mock data. Besides, unit testing, system testing, acceptance
testing, and performance testing are proposed as the main
testing steps.
[23] explains the differences between testing OLTP and
OLAP systems and proposes a detailed list of testing cate-
gories. It also enumerates possible test scenarios and differ-
ent types of data to be used for testing.
[8] discusses the main aspects in data warehouse testing.
In particular, it distinguishes the different roles required in
the testing team, and the different types of testing each role
should carry out.
[4] presents some lessons learnt on data warehouse testing,
emphasizing the role played by constraint testing, source-to-
target testing, and validation of error processing procedures.
All papers mentioned above provide useful hints and list
some key testing activities. On the other hand, our paper
is the first attempt to define a comprehensive framework for
data mart testing.
3. METHODOLOGICAL FRAMEWORK
To discuss how testing relates to the different phases of
data mart design, we adopt as a methodological framework
the one described in [7]. As sketched in Figure 1, this frame-
work includes eight phases:
• Requirement analysis: requirements are elicited from
users and represented either informally by means of
proper glossaries or formally (e.g., by means of goal-
oriented diagrams as in [5]);
• Analysis and reconciliation: data sources are inspected,
normalized, and integrated to obtain a reconciled schema;
• Conceptual design: a conceptual schema for the data
mart –e.g., in the form of a set of fact schemata [6]– is
designed considering both user requirements and data
available in the reconciled schema;
• Workload refinement: the preliminary workload ex-
pressed by users is refined and user profiles are singled
out, using for instance UML use case diagrams;
Figure 1: Reference design methodology
• Logical design: a logical schema for the data mart (e.g.,
in the form of a set of star schemata) is obtained by
properly translating the conceptual schema;
• Data staging design: ETL procedures are designed con-
sidering the source schemata, the reconciled schema,
and the data mart logical schema; and
• Physical design: this includes index selection, schema
fragmentation, and all other issues related to physical
allocation.
• Implementation: this includes implementation of ETL
procedures and creation of front-end reports.
Note that this methodological framework is general enough
to host supply-driven, demand-driven, and mixed ap-
proaches to design. While in a supply-driven approach con-
ceptual design is mainly based on an analysis of the available
source schemata [6], in a demand-driven approach user re-
quirements are the driving force of conceptual design [3]. Fi-
nally, in a mixed approach requirement analysis and source
schema analysis are carried out in parallel, and requirements
are used to actively reduce the complexity of source schema
analysis [1].
4. TESTING ACTIVITIES
In order to better frame the different testing activities, we
identify two distinct, though not independent, classification
coordinates: what is tested and how it is tested.
As concerns the first coordinate, what, we already men-
tioned that testing data quality is undoubtedly at the core of
data warehouse testing. Testing data quality mainly entails
an accurate check on the correctness of the data loaded by
ETL procedures and accessed by front-end tools. However,
in the light of the complexity of data warehouse projects and
of the close relationship between good design and good per-
formance, we argue that testing the design quality is almost
equally important. Testing design quality mainly implies
verifying that user requirements are well expressed by the
conceptual schema of the data mart and that the conceptual
and logical schemata are well-built. Overall, the items to be
tested can then be summarized as follows:
• Conceptual schema: it describes the data mart from an
implementation-independent point of view, specifying
the facts to be monitored, their measures, the hierar-
chies to be used for aggregation. Our methodological
framework adopts the Dimensional Fact Model to this
end [6].
• Logical schema: it describes the structure of the data
repository at the core of the data mart. If the im-
plementation target is a ROLAP platform, the logical
schema is actually a relational schema (typically, a star
schema or one of its variants).
• ETL procedures: the complex procedures that are in
charge of feeding the data repository starting from
data sources.
• Database: the repository storing data.
• Front-end: the applications accessed by end-users to
analyze data; typically, these are either static reporting
tools or more flexible OLAP tools.
As concerns the second coordinate, how, the seven types
of test that, in our experience, best fit the characteristics of
data warehouse systems are summarized below:
• Functional test: it verifies that the item is compliant
with its specified business requirements.
• Usability test: it evaluates the item by letting users
interact with it, in order to verify that the item is easy
to use and comprehensible.
• Performance test: it checks that the item performance
is satisfactory under typical workload conditions.
• Stress test: it shows how well the item performs with
peak loads of data and very heavy workloads.
• Recovery test: it checks how well an item is able to re-
cover from crashes, hardware failures and other similar
problems.
• Security test: it checks that the item protects data and
maintains functionality as intended.
• Regression test: it checks that the item still functions
correctly after a change has occurred.
Remarkably, these types of test are tightly related to six of
the software quality factors described in [12]: correctness,
usability, efficiency, reliability, integrity, flexibility.
The relationship between what and how is summarized in
Table 1, where each check mark points out that a given type
of test should be applied to a given item. Starting from this
table, in the following subsections we discuss the main test-
ing activities and how they are related to the methodological
framework outlined in Section 3.
Table 1: What vs. how in testing (check marks indicate which types of test apply to each item; regression tests apply to all five items, which are grouped into analysis & design versus implementation)
We preliminarily remark that a requirement for an effec-
tive test is the early definition, for each testing activity, of
the necessary conditions for passing the test. These condi-
tions should be verifiable and quantifiable. This means that
proper metrics should be introduced, together with their ac-
ceptance thresholds, so as to get rid of subjectivity and am-
biguity issues. Using quantifiable metrics is also necessary
for automating testing activities.
While for some types of test, such as performance tests,
the metrics devised for generic software systems (such as
the maximum query response time) can be reused and ac-
ceptance thresholds can be set intuitively, for other types of
test data warehouse-specific metrics are needed. Unfortu-
nately, very few data warehouse-specific metrics have been
defined in the literature. Besides, the criteria for setting
their acceptance thresholds often depend on the specific fea-
tures of the project being considered. So, in most cases, ad
hoc metrics will have to be defined together with their thresholds.
An effective approach to this activity hinges on the following
main phases [18]:
1. Identify the goal of the metrics by specifying the un-
derlying hypotheses and the quality criteria to be mea-
sured.
2. Formally define the metrics.
3. Theoretically validate the metrics using either axiomatic
approaches or measurement theory-based approaches,
to get a first assessment of the metrics correctness and
applicability.
4. Empirically validate the metrics by applying them to data of previous projects, in order to get practical proof of their capability of measuring their goal quality
criteria. This phase is also aimed at understanding the
metrics implications and fixing the acceptance thresh-
olds.
5. Apply and accredit the metrics. The metrics defini-
tions and thresholds may evolve in time to adapt to
new projects and application domains.
4.1 Testing the Conceptual Schema
Software engineers know very well that the earlier an error is detected in the software design cycle, the cheaper it is to correct. One of the advantages of adopt-
ing a data warehouse methodology that entails a conceptual
design phase is that the conceptual schema produced can
be thoroughly tested for user requirements to be effectively
supported.
We propose two main types of test on the data mart con-
ceptual schema in the scope of functional testing. The first,
that we call fact test, verifies that the workload preliminar-
ily expressed by users during requirement analysis is actu-
ally supported by the conceptual schema. This can be easily
achieved by checking, for each workload query, that the re-
quired measures have been included in the fact schema and
that the required aggregation level can be expressed as a
valid grouping set on the fact schema. We call the second
type of test a conformity test, because it is aimed at as-
sessing how well conformed hierarchies have been designed.
This test can be carried out by measuring the sparseness
of the bus matrix [7], which associates each fact with its
dimensions, thus pointing out the existence of conformed
hierarchies. Intuitively, if the bus matrix is very sparse,
16. the designer probably failed to recognize the semantic and
structural similarities between apparently different hierar-
chies. Conversely, if the bus matrix is very dense, the de-
signer probably failed to recognize the semantic and struc-
tural similarities between apparently different facts.
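To make these two tests concrete, the following is a minimal sketch in Python; the representation of fact schemata and of the bus matrix, and the names FactSchema, supports_query, and bus_matrix_sparseness, are illustrative assumptions rather than part of the methodology.

    # Minimal sketch of the fact test and the conformity test on a
    # conceptual schema; all data structures are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class FactSchema:
        name: str
        measures: set            # measures defined for the fact
        hierarchy_levels: set    # all attributes usable for grouping

    def supports_query(fact, required_measures, group_by):
        """Fact test: a workload query is supported iff its measures appear
        in the fact schema and its aggregation level is a valid grouping
        set on the fact schema."""
        return (required_measures <= fact.measures
                and group_by <= fact.hierarchy_levels)

    def bus_matrix_sparseness(bus):
        """Conformity test: fraction of empty cells in the bus matrix,
        where bus maps each fact to the set of its dimensions."""
        dimensions = set().union(*bus.values())
        checked = sum(len(dims) for dims in bus.values())
        return 1 - checked / (len(bus) * len(dimensions))

    sales = FactSchema("sales", {"quantity", "revenue"},
                       {"date", "month", "year", "store", "city", "product"})
    print(supports_query(sales, {"revenue"}, {"month", "city"}))  # True
    print(supports_query(sales, {"discount"}, {"month"}))         # False
    bus = {"sales": {"date", "store", "product"},
           "inventory": {"date", "store", "product"}}
    print(bus_matrix_sparseness(bus))  # 0.0: fully conformed dimensions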
Other types of test that can be executed on the conceptual
schema are related to its understandability, thus falling in
the scope of usability testing. An example of a quantitative
approach to this test is the one described in [18], where a set
of metrics for measuring the quality of a conceptual schema
from the point of view of its understandability are proposed
and validated. Examples of these metrics are the average
number of levels per hierarchy and the number of measures
per fact.
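In the same spirit, here is a rough sketch of two such understandability metrics; the dictionary-based representation of hierarchies is our assumption, not the formalization used in [18].

    # Sketch of two conceptual-schema understandability metrics: average
    # number of levels per hierarchy, and number of measures per fact.
    def avg_levels_per_hierarchy(hierarchies):
        """hierarchies maps each hierarchy name to its ordered levels."""
        return sum(len(ls) for ls in hierarchies.values()) / len(hierarchies)

    sales_hierarchies = {"time": ["date", "month", "year"],
                         "location": ["store", "city"]}
    sales_measures = {"quantity", "revenue"}
    print(avg_levels_per_hierarchy(sales_hierarchies))  # 2.5 levels
    print(len(sales_measures))                          # 2 measures per fact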
4.2 Testing the Logical Schema
Testing the logical schema before it is implemented and
before ETL design can dramatically reduce the impact of
errors due to bad logical design. An effective approach
to functional testing consists in verifying that a sample of
queries in the preliminary workload can correctly be for-
mulated in SQL on the logical schema. We call this the
star test. In putting the sample together, priority should
be given to the queries involving irregular portions of hier-
archies (e.g., those including many-to-many associations or
cross-dimensional attributes), those based on complex ag-
gregation schemes (e.g., queries that require measures ag-
gregated through different operators along the different hi-
erarchies), and those leaning on non-standard temporal sce-
narios (such as yesterday-for-today).
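As an illustration, a star-test check could be sketched as follows; the star schema (FT_SALES, DT_DATE, DT_STORE) is hypothetical, and using SQLite's EXPLAIN QUERY PLAN to verify that a workload query is well-formed on the schema is one possible instantiation, not a prescription.

    # Sketch of a star test: verify that a preliminary-workload query can
    # be formulated in SQL on the (possibly empty) star schema.
    import sqlite3

    STAR_TEST_QUERY = """
        SELECT d.month, s.city, SUM(f.revenue) AS revenue
        FROM FT_SALES f
        JOIN DT_DATE d ON f.date_key = d.date_key
        JOIN DT_STORE s ON f.store_key = s.store_key
        GROUP BY d.month, s.city"""

    def star_test(conn, sql):
        """Pass if the DBMS can parse and plan the query on the schema
        (the tables must exist in the test database, even if empty)."""
        try:
            conn.execute("EXPLAIN QUERY PLAN " + sql)
            return True
        except sqlite3.Error:
            return False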
In the scope of usability testing, in [17] some simple met-
rics based on the number of fact tables and dimension ta-
bles in a logical schema are proposed. These metrics can be
adopted to effectively capture schema understandability.
Finally, a performance test can be carried out on the log-
ical schema by checking to what extent it is compliant with
the multidimensional normal forms, which ensure summariz-
ability and support an efficient database design [11, 10].
Besides the above-mentioned metrics, focused on usabil-
ity and performance, the literature proposes other metrics
for database schemata, and relates them to abstract quality
factors. For instance, in the scope of maintainability, [15]
introduces a set of metrics aimed at evaluating the quality
of a data mart logical schema with respect to its ability to
sustain changes during an evolution process.
4.3 Testing the ETL Procedures
ETL testing is probably the most complex and critical
testing phase, because it directly affects the quality of data.
Since ETL is heavily code-based, most standard techniques
for generic software system testing can be reused here.
A functional test of ETL is aimed at checking that ETL
procedures correctly extract, clean, transform, and load data
into the data mart. The best approach here is to set up unit
tests and integration tests. Unit tests are white-box tests
that each developer carries out on the units (s)he devel-
oped. They allow for breaking down the testing complexity,
and they also enable more detailed reports on the project
progress to be produced. Units for ETL testing can be ei-
ther vertical (one test unit for each conformed dimension,
plus one test unit for each group of correlated facts) or hor-
izontal (separate tests for static extraction, incremental ex-
traction, cleaning, transformation, static loading, incremen-
tal loading, view update); the most effective choice mainly
depends on the number of facts in the data marts, on how
complex cleaning and transformation are, and on how the
implementation plan was allotted to the developers. In par-
ticular, crucial aspects to be considered during the loading
test are related to dimension tables (correctness of roll-
up functions, effective management of dynamic hierarchies
and irregular hierarchies), fact tables (effective management
of late updates), and materialized views (use of correct ag-
gregation functions).
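As a sketch of what a vertical unit test on a conformed date dimension might look like (table names, the assertions, and the 'YYYY-MM' month encoding are illustrative assumptions):

    # Illustrative unit test for the loading of a dimension table: checks
    # that roll-up functions are correct (many-to-one along the hierarchy)
    # and that fact rows reference existing dimension members.
    import sqlite3
    import unittest

    class DateDimensionLoadTest(unittest.TestCase):
        def setUp(self):
            self.conn = sqlite3.connect("datamart_test.db")  # test copy

        def test_month_rolls_up_to_one_year(self):
            # Assuming months are encoded as 'YYYY-MM', each month value
            # must roll up to exactly one year.
            bad = self.conn.execute("""
                SELECT month FROM DT_DATE
                GROUP BY month HAVING COUNT(DISTINCT year) > 1""").fetchall()
            self.assertEqual(bad, [])

        def test_no_orphan_foreign_keys(self):
            orphans = self.conn.execute("""
                SELECT COUNT(*) FROM FT_SALES f
                LEFT JOIN DT_DATE d ON f.date_key = d.date_key
                WHERE d.date_key IS NULL""").fetchone()[0]
            self.assertEqual(orphans, 0)

    if __name__ == "__main__":
        unittest.main()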
After unit tests have been completed, an integration test
allows the correctness of data flows in ETL procedures to be
checked. Different quality dimensions, such as data coher-
ence (the respect of integrity constraints), completeness (the
percentage of data found), and freshness (the age of data)
should be considered. Some metrics for quantifying these
quality dimensions have been proposed in [22].
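A minimal sketch of how such measures could be computed; the formulas below are plausible instantiations, not the exact metrics of [22].

    # Simple measures for three data quality dimensions of ETL flows.
    from datetime import datetime, timezone

    def completeness(rows_loaded, rows_in_sources):
        """Percentage of source data actually found in the data mart."""
        return rows_loaded / rows_in_sources

    def freshness(last_load):
        """Age of data in hours since the last load (tz-aware datetime)."""
        return (datetime.now(timezone.utc) - last_load).total_seconds() / 3600

    def coherence(violations, constraints_checked):
        """Fraction of checked integrity constraints that are respected."""
        return 1 - violations / constraints_checked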
During requirement analysis the designer, together with
users and database administrators, should have singled out and ranked by severity the most common causes of faulty data, with the aim of planning a proper strategy for dealing
with ETL errors. Common strategies for dealing with errors
of a given kind are “automatically clean faulty data”, “reject
faulty data”, “hand faulty data to data mart administrator”,
etc. So, not surprisingly, a distinctive feature of ETL func-
tional testing is that it should be carried out with at least
three different databases, including respectively (i) correct
and complete data, (ii) data simulating the presence of faulty
data of different kinds, and (iii) real data. In particular, tests
using dirty simulated data are sometimes called forced-error
tests: they are designed to force ETL procedures into error
conditions aimed at verifying that the system can deal with
faulty data as planned during requirement analysis.
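The following is a sketch of a forced-error test under a "reject faulty data" strategy; the tiny run_etl below is a stand-in stub for the actual ETL procedure under test, and the row format is our assumption.

    # Forced-error test: seed the input with faulty records of known kinds
    # and verify that the planned error strategy is applied.
    from datetime import datetime

    def run_etl(rows):
        """Stub ETL: loads rows with a store key and a valid date,
        rejects the rest (stands in for the real procedure)."""
        loaded, rejected = [], []
        for r in rows:
            try:
                assert r["store"] is not None
                datetime.strptime(r["date"], "%Y-%m-%d")
                loaded.append(r)
            except (AssertionError, ValueError):
                rejected.append(r)
        return loaded, rejected

    faulty = [{"store": "S01", "date": "2009-02-30", "revenue": 100.0},  # bad date
              {"store": None, "date": "2009-02-10", "revenue": 50.0}]    # no key
    clean = [{"store": "S01", "date": "2009-02-10", "revenue": 75.0}]

    loaded, rejected = run_etl(clean + faulty)
    assert loaded == clean      # only clean data reaches the mart
    assert rejected == faulty   # faulty data is rejected as planned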
Performance and stress tests are complementary in as-
sessing the efficiency of ETL procedures. Performance tests
evaluate the behavior of ETL with reference to a routine
workload, i.e., when a domain-typical amount of data has
to be extracted and loaded; in particular, they check that
the processing time is compatible with the time frames ex-
pected for data-staging processes. On the other hand, stress
tests simulate an extraordinary workload due to a signifi-
cantly larger amount of data.
The recovery test of ETL checks for robustness by simu-
lating faults in one or more components and evaluating the
system response. For example, you can cut off the power
supply while an ETL process is in progress or you can set
a database offline while an OLAP session is in progress to
check the effectiveness of restore policies.
As concerns ETL security, there is a need for verifying
that the database used to temporarily store the data being
processed (the so-called data staging area) cannot be vio-
lated, and that the network infrastructure hosting the data
flows that connect the data sources and the data mart is
secure.
4.4 Testing the Database
We assume that the logical schema quality has already
been verified during the logical schema tests, and that all is-
sues related to data quality are covered by ETL tests. Then,
database testing is mainly aimed at checking the database
performance using either standard (performance test) or
heavy (stress test) workloads. Like for ETL, the size of the
tested databases and their data distribution must be dis-
cussed with the designers and the database administrator.
Performance tests can be carried out either on a database
including real data or on a mock database, but the database
size should be compatible with the average expected data
volume. On the other hand, stress tests are typically car-
ried out on mock databases whose size is significantly larger
than expected. Standard database metrics –such as
maximum query response time– can be used to quantify the
test results. To advance these testing activities as much as
possible and to make their results independent of front-end
applications, we suggest using SQL to code the workload.
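A minimal sketch of such a test; the workload query, the acceptance threshold, and the SQLite connection are illustrative assumptions.

    # Database performance test: run an SQL-coded workload and check each
    # query's response time against an acceptance threshold.
    import sqlite3
    import time

    MAX_RESPONSE_TIME = 30.0  # acceptance threshold, in seconds

    workload = {"monthly revenue by city": """
        SELECT d.month, s.city, SUM(f.revenue)
        FROM FT_SALES f
        JOIN DT_DATE d ON f.date_key = d.date_key
        JOIN DT_STORE s ON f.store_key = s.store_key
        GROUP BY d.month, s.city"""}

    conn = sqlite3.connect("datamart_test.db")  # hypothetical test copy
    for name, sql in workload.items():
        start = time.perf_counter()
        conn.execute(sql).fetchall()            # materialize the full result
        elapsed = time.perf_counter() - start
        print(name, f"{elapsed:.1f}s",
              "OK" if elapsed <= MAX_RESPONSE_TIME else "FAIL")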
Recovery tests enable testers to verify the DBMS behav-
ior after critical errors such as power failures during updates, network faults, and hard disk failures.
Finally, security tests mainly concern the possible adop-
tion of some cryptography technique to protect data and the
correct definition of user profiles and database access grants.
4.5 Testing the Front-End
Functional testing of the analysis front-ends must neces-
sarily involve a very large number of end-users, who gener-
ally are so familiar with application domains that they can
detect even the slightest abnormality in data. Nevertheless,
wrong results in OLAP analyses may be difficult to recog-
nize. They can be caused not only by faulty ETL proce-
dures, but even by incorrect data aggregations or selections
in front-end tools. Some errors are not due to the data mart;
instead, they result from poor data quality in the
source database. In order to allow this situation to be rec-
ognized, a common approach to front-end functional testing
in real projects consists in comparing the results of OLAP
analyses with those obtained by directly querying the source
databases. Of course, though this approach can be effective
on a sample basis, it cannot be extensively adopted due to
the huge number of possible aggregations that characterize
multidimensional queries.
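For example, a sample-based comparison could be sketched as follows; both schemas and the choice of aggregate are hypothetical.

    # Front-end functional check on a sample basis: the yearly total seen
    # through the data mart must match the one computed on the source.
    import sqlite3

    def source_total(conn, year):
        return conn.execute(
            "SELECT SUM(amount) FROM orders "
            "WHERE strftime('%Y', order_date) = ?", (str(year),)).fetchone()[0]

    def mart_total(conn, year):
        return conn.execute(
            "SELECT SUM(f.revenue) FROM FT_SALES f "
            "JOIN DT_DATE d ON f.date_key = d.date_key "
            "WHERE d.year = ?", (year,)).fetchone()[0]

    src = sqlite3.connect("source_test.db")      # hypothetical source copy
    mart = sqlite3.connect("datamart_test.db")   # hypothetical mart copy
    assert abs(source_total(src, 2009) - mart_total(mart, 2009)) < 1e-6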
A significant sample of queries to be tested can be selected
in mainly two ways. In the “black-box way”, the workload
specification obtained in output by the workload refinement
phase (typically, a use case diagram where actors stand for
user profiles and use cases represent the most frequent analysis queries) is used to determine the test cases, much like use case diagrams are profitably employed for testing-in-the-large in generic software systems. In the "white-box way", instead, the subset of data aggregations to be tested can be determined by applying proper coverage criteria to the multidimensional lattice(1) of each fact, much like decision, statement, and path coverage criteria are applied to the control graph of a generic software procedure during testing-in-the-small.

(1) The multidimensional lattice of a fact is the lattice whose nodes and arcs correspond, respectively, to the group-by sets supported by that fact and to the roll-up relationships that relate those group-by sets.

Table 2: Coverage criteria for some testing activities; the expected coverage is expressed with reference to the coverage criterion

Testing activity | Coverage criterion | Measurement | Expected coverage
fact test | each information need expressed by users during requirement analysis must be tested | percentage of queries in the preliminary workload that are supported by the conceptual schema | partial, depending on the extent of the preliminary workload
conformity test | all data mart dimensions must be tested | bus matrix sparseness | total
usability test of the conceptual schema | all facts, dimensions, and measures must be tested | conceptual metrics | total
ETL unit test | all decision points must be tested | correct loading of the test data sets | total
ETL forced-error test | all error types specified by users must be tested | correct loading of the faulty data sets | total
front-end unit test | at least one group-by set for each attribute in the multidimensional lattice of each fact must be tested | correct analysis result of a real data set | total
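Returning to the "white-box way", the following is a rough sketch of lattice-based selection of test cases; simplifying the multidimensional lattice to plain subsets of dimensions and using a greedy covering rule are our assumptions.

    # White-box selection of group-by sets: enumerate the (simplified)
    # multidimensional lattice of a fact and greedily pick a sample that
    # tests each attribute at least once (cf. Table 2).
    from itertools import combinations

    dimensions = ["date", "store", "product"]
    lattice = [set(c) for r in range(len(dimensions) + 1)
               for c in combinations(dimensions, r)]

    def covering_sample(lattice, attributes):
        uncovered, sample = set(attributes), []
        for node in sorted(lattice, key=len, reverse=True):
            if node & uncovered:
                sample.append(node)
                uncovered -= node
        return sample

    print(covering_sample(lattice, dimensions))  # [{'date','store','product'}]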
Also in front-end testing it may be useful to distinguish
between unit and integration tests. While unit tests should
be aimed at checking the correctness of the reports involving
single facts, integration tests entail analysis sessions that ei-
ther correlate multiple facts (the so-called drill-across queries)
or take advantage of different application components. For
example, this is the case when dashboard data are “drilled
through” an OLAP application.
An integral part of front-end tests are usability tests, which check that OLAP reports are suitably represented and commented, so as to avoid any misunderstanding about the real meaning of data.
A performance test submits a group of concurrent queries
to the front-end and checks for the time needed to process
those queries. Performance tests imply a preliminary spec-
ification of the standard workload in terms of number of
concurrent users, types of queries, and data volume. On the other hand, stress tests entail applying progressively heavier workloads than the standard ones to evaluate the system stability. Note that performance and stress tests must be focused on the front-end, which includes not only the client interface but also the reporting and OLAP server-side engines. This
means that the access times due to the DBMS should be
subtracted from the overall response times measured.
Finally, in the scope of security testing, it is particularly important to check that user profiles are properly set up. You should also check that single sign-on policies remain correctly in effect when switching between different analysis applications.
4.6 Regression Tests
A very relevant problem for frequently updated systems, such as data marts, concerns checking that new components and new add-on features are compatible with the operation of the whole system. In this case, the term regression test is used to define the testing activities carried out to make sure that any change applied to the system does not jeopardize the quality of preexisting, already tested features and does not corrupt the system performance.
Testing the whole system many times from scratch has
huge costs. Three main directions can be followed to reduce
these costs:
• An expensive part of each test is the validation of the test results. In regression testing, it is often possible to skip this phase by just checking that the test results are consistent with those obtained at the previous iteration (see the sketch after this list).
• Test automation allows the efficiency of testing ac-
tivities to be increased, so that reproducing previous
tests becomes less expensive. Test automation will be
briefly discussed in Section 6.
• Impact analysis can be used to significantly restrict
the scope of testing. In general, impact analysis is
aimed at determining what other application objects
are affected by a change in a single application ob-
ject [9]. Remarkably, some ETL tool vendors already
provide some impact analysis functionalities. An ap-
proach to impact analysis for changes in the source
data schemata is proposed in [14].
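As a sketch of the first direction above, a baseline comparison could look like this; the JSON storage of previous results is our assumption.

    # Low-cost regression check: compare a test query's current result
    # with the result recorded at the previous iteration.
    import json
    import os

    def run_and_compare(conn, name, sql, baseline_path="baselines.json"):
        """Return 'NEW' on the first run, then 'OK' if the result matches
        the saved baseline and 'REGRESSION' otherwise."""
        baselines = {}
        if os.path.exists(baseline_path):
            with open(baseline_path) as f:
                baselines = json.load(f)
        result = [list(row) for row in conn.execute(sql).fetchall()]
        if name not in baselines:
            baselines[name] = result               # record the baseline
            with open(baseline_path, "w") as f:
                json.dump(baselines, f)
            return "NEW"
        return "OK" if result == baselines[name] else "REGRESSION"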
5. TEST COVERAGE
Testing can reduce the probability of a system fault but
cannot set it to zero, so measuring the coverage of tests is
necessary to assess the overall system reliability. Measuring
test coverage requires first of all the definition of a suitable
coverage criterion. Different coverage criteria, such as state-
ment coverage, decision coverage, and path coverage, were
devised in the scope of code testing. The choice of one or
another criterion deeply affects the test length and cost, as
well as the achievable coverage. So, coverage criteria are
chosen by trading off test effectiveness and efficiency. Ex-
amples of coverage criteria that we propose for some of the
testing activities described above are reported in Table 2.
Figure 2: UML activity diagram for design and testing, framing test planning and the testing activities (ETL unit, integration, forced-error, performance, stress, recovery, and security tests; front-end functional, usability, security, performance, and stress tests; all database tests) across the roles analyst/designer, tester, developer, DBA, and end-user
6. A TIMELINE FOR TESTING
From a methodological point of view, the three main phases
of testing are [13]:
• Create a test plan. The test plan describes the tests
that must be performed and their expected coverage
of the system requirements.
• Prepare test cases. Test cases enable the implementa-
tion of the test plan by detailing the testing steps to-
gether with their expected results. The reference databases
for testing should be prepared during this phase, and
a wide, comprehensive set of representative workloads
should be defined.
• Execute tests. A test execution log tracks each test
along with its results.
Figure 2 shows a UML activity diagram that frames the
different testing activities within the design methodology
outlined in Section 3. This diagram can be used in a project
as a starting point for preparing the test plan.
It is worth mentioning here that test automation plays
a basic role in reducing the costs of testing activities (especially regression tests) on the one hand, and in increasing test coverage on the other [4]. Remarkably, commercial tools (such as QACenter by Compuware) can be used for implementation-related testing activities to simulate specific workloads and analysis sessions, or to measure a process outcome. As to design-related testing activities, the metrics
proposed in the previous sections can be measured by writ-
ing ad hoc procedures that access the meta-data repository
and the DBMS catalog.
7. CONCLUSIONS AND LESSONS LEARNT
In this paper we proposed a comprehensive approach which
adapts and extends the testing methodologies proposed for
general-purpose software to the peculiarities of data ware-
house projects. Our proposal builds on a set of tips and sug-
gestions coming from our direct experience on real projects,
as well as from interviews we conducted with data warehouse
practitioners. As a result, a set of relevant testing activi-
ties were identified, classified, and framed within a reference
design methodology.
In order to experiment with our approach on a case study, we
are currently supporting a professional design team engaged
in a large data warehouse project, which will help us better
focus on relevant issues such as test coverage and test doc-
umentation. In particular, to better validate our approach
and understand its impact, we will apply it to one out of
two data marts developed in parallel, so as to assess the
extra effort due to comprehensive testing on the one hand, and the saving in post-deployment error correction activities and
the gain in terms of better data and design quality on the
other.
To close the paper, we would like to summarize the main
lessons we learnt so far:
• The chance to perform an effective test depends on the
documentation completeness and accuracy in terms
of collected requirements and project description. In
other words, if you did not specify what you want from
your system at the beginning, you cannot expect to get
it right later.
• The test phase is part of the data warehouse life-cycle,
and it acts in synergy with design. For this reason, the
test phase should be planned and arranged at the be-
ginning of the project, when you can specify the goals
of testing, which types of tests must be performed,
which data sets need to be tested, and which quality
level is expected.
• Testing is not a one-man activity. The testing team
should include testers, developers, designers, database
administrators, and end-users, and it should be set up
during the project planning phase.
• Testing of data warehouse systems is largely based on
data. A successful testing must rely on real data, but
it also must include mock data to reproduce the most
common error situations that can be encountered in
ETL. Accurately preparing the right data sets is one
of the most critical activities to be carried out during
test planning.
• No matter how deeply the system has been tested, it is almost certain that, sooner or later, an unexpected data fault that cannot be properly handled by ETL will occur. So keep in mind that, while testing must come to an end someday, data quality certification is an everlasting process. The borderline between testing and certification clearly depends on how precisely requirements were stated and on the contract that regulates the project.
8. REFERENCES
[1] A. Bonifati, F. Cattaneo, S. Ceri, A. Fuggetta, and
S. Paraboschi. Designing data marts for data
warehouses. ACM Transactions on Software
Engineering Methodologies, 10(4):452–483, 2001.
[2] K. Brahmkshatriya. Data warehouse testing.
http://www.stickyminds.com, 2007.
[3] R. Bruckner, B. List, and J. Schiefer. Developing
requirements for data warehouse systems with use
cases. In Proc. Americas Conf. on Information
Systems, pages 329–335, 2001.
[4] R. Cooper and S. Arbuckle. How to thoroughly test a
data warehouse. In Proc. STAREAST, Orlando, 2002.
[5] P. Giorgini, S. Rizzi, and M. Garzetti. GRAnD: A
goal-oriented approach to requirement analysis in data
warehouses. Decision Support Systems, 5(1):4–21,
2008.
[6] M. Golfarelli, D. Maio, and S. Rizzi. The dimensional
fact model: A conceptual model for data warehouses.
International Journal of Cooperative Information
Systems, 7(2-3):215–247, 1998.
[7] M. Golfarelli and S. Rizzi. Data warehouse design:
Modern principles and methodologies. McGraw-Hill,
2009.
[8] D. Haertzen. Testing the data warehouse.
http://www.infogoal.com, 2009.
[9] R. Kimball and J. Caserta. The Data Warehouse ETL
Toolkit. John Wiley & Sons, 2004.
[10] J. Lechtenbörger and G. Vossen. Multidimensional
normal forms for data warehouse design. Information
Systems, 28(5):415–434, 2003.
[11] W. Lehner, J. Albrecht, and H. Wedekind. Normal
forms for multidimensional databases. In Proc.
SSDBM, pages 63–72, Capri, Italy, 1998.
[12] J. McCall, P. Richards, and G. Walters. Factors in
software quality. Technical Report AD-A049-014, 015,
055, NTIS, 1977.
[13] A. Mookerjea and P. Malisetty. Best practices in data
warehouse testing. In Proc. Test, New Delhi, 2008.
[14] G. Papastefanatos, P. Vassiliadis, A. Simitsis, and
Y. Vassiliou. What-if analysis for data warehouse
evolution. In Proc. DaWaK, pages 23–33, Regensburg,
Germany, 2007.
[15] G. Papastefanatos, P. Vassiliadis, A. Simitsis, and
Y. Vassiliou. Design metrics for data warehouse
evolution. In Proc. ER, pages 440–454, 2008.
[16] R. Pressman. Software Engineering: A practitioner’s
approach. The McGraw-Hill Companies, 2005.
[17] M. Serrano, C. Calero, and M. Piattini. Experimental
validation of multidimensional data models metrics. In
Proc. HICSS, page 327, 2003.
[18] M. Serrano, J. Trujillo, C. Calero, and M. Piattini.
Metrics for data warehouse conceptual models
understandability. Information & Software Technology,
49(8):851–870, 2007.
[19] I. Sommerville. Software Engineering. Pearson
Education, 2004.
[20] P. Tanuška, W. Verschelde, and M. Kopček. The
proposal of data warehouse test scenario. In Proc.
ECUMICT, Gent, Belgium, 2008.
[21] A. van Bergenhenegouwen. Data warehouse testing.
http://www.ti.kviv.be, 2008.
[22] P. Vassiliadis, M. Bouzeghoub, and C. Quix. Towards
quality-oriented data warehouse usage and evolution.
In Proc. CAiSE, Heidelberg, Germany, 1999.
[23] Vv. Aa. Data warehouse testing and implementation.
In Intelligent Enterprise Encyclopedia. BiPM
Institute, 2009.