Running head: TECHNOLOGY EVALUATION
Technology Evaluation
Student’s Name
Institutional Affiliation
Background
In 2004, the MITRE Intelligence Community Test and Integration Center in GO24 began developing the Standardized Technical Evaluation Process (STEP), with the aim of tracing previous assessment work and ensuring that future appraisals are superior, objective, and reliable. Since then, STEP has grown by leaps and bounds in GO24, GO25, and GO27, as well as in G151 assessment tasks. STEP's four stages follow a familiar structure for carrying out technological assessments. During the development and refinement of STEP, various resources and subject matter professionals were consulted both within and outside MITRE to gain a broader understanding of assessment practice as well as theory. The STEP workflow and methodology incorporate many of these practices and their recommendations.
MITRE carries out several technological appraisals for its sponsors every year. These appraisals have taken place for some time and have covered a number of products and technologies. To keep up with changing technology and sponsor requirements, the team needs a well-defined appraisal process that is cost-effective, repeatable, and as fair as possible. The benefits of adhering to a standardized, cost-effective process include:
· Reliability and better traceability through fixed steps and deliverables.
· Greater efficiency, so that less effort is needed for each assessment.
· Defensible and repeatable outcomes.
· Improved communication within and among assessment teams.
· Assessments that are comparable and can be shared easily throughout the sponsor base.
· An opportunity to develop leadership and record lessons learned for future appraisals.
Intended Audience
The Standardized Technical Evaluation Process is meant for MITRE program leads and engineers carrying out technological assessments of one or more products, and is suitable for both experienced and inexperienced evaluators. STEP is also suitable for any software or data technology assessment, even though it was originally developed for GO24 security tool assessments (Sarah, 2007). Because evaluations tend to differ extensively in size and scope, STEP offers alternatives both for evaluation teams that divide their duties in parallel for better efficiency and for smaller teams that work together at every stage. The STEP workflow and methodology offer wide-ranging resources for teams that wish to standardize their assessments and organize their day-to-day tasks.
Challenges
There are four major challenges associated with technology evaluations: creating a practical assessment timeline, communicating with the sponsor, ensuring reliability, and ensuring defensibility. Preparation is a critical part of the technical evaluation process for creating reasonable timelines and expectations (Sarah, 2007). STEP makes it easier for teams to identify individual actions and estimate the time needed to accomplish each one. Evaluation teams can break bigger tasks into smaller ones to make sure that all assessment tasks are well defined. The teams must also work in collaboration with their sponsor to determine a suitable number of products to test, in view of the time and resources available. This calls for effective planning and realistic timelines.
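The planning step described above, breaking larger tasks into smaller, estimable ones, can be sketched as a simple roll-up of per-task time estimates. The stage names follow STEP, but the individual tasks and day counts below are purely illustrative assumptions, not figures from the guide.

```python
# Illustrative task breakdown with time estimates in days.
# Tasks and durations are hypothetical, not from the STEP guide.
tasks = {
    "Scoping and Test Strategy": {
        "preliminary scoping": 3, "market research": 10, "draft test plan": 5,
    },
    "Test Preparation": {
        "acquire products": 7, "define criteria": 4, "install test environment": 6,
    },
    "Testing, Results, and Final Report": {
        "testing": 15, "crosswalk": 2, "final report": 5,
    },
}

def stage_days(stage):
    """Total estimated days for one stage: the sum of its task estimates."""
    return sum(tasks[stage].values())

total = sum(stage_days(s) for s in tasks)
print(f"Estimated evaluation length: {total} days")
```

Summing small, well-defined estimates in this way makes the overall timeline traceable: if the schedule slips, the team can point to the specific task whose estimate was off.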
According to Laakso and Kiviniemi (2012), it is important for MITRE teams to carry out respectable evaluations that are wide-ranging and fair. Reliable documentation of test information and criteria is critical to achieving evaluation defensibility, so that vendors, sponsors, and stakeholders can review the work if questions come up. Effective communication among the evaluation team, vendors, professionals, and stakeholders is critical for a successful evaluation. The team must understand both the issue and the best solution.
Evaluation Process
The Standardized Technical Evaluation Process defines three main stages: Scoping and Test Strategy; Test Preparation; and Testing, Results, and Final Report. An optional fourth stage, Integration and Deployment, depends on the sponsor and the case in question (Dereli and Altun, 2013). Each stage has distinct goals, actions, and associated document deliverables. Control gates separate the stages, and each stage must be finished before the next begins. These gates help ensure evaluation defensibility. For instance, teams must define their evaluation methodology and testing plans before installing or testing the evaluation products. Setting the evaluation method before testing begins helps prevent the introduction of bias based on prior knowledge of a particular product's characteristics or design (Sarah, 2007).
During Scoping and Test Strategy, the evaluation team is briefed on the mission goals and the technology gap, and agrees on major requirements by scoping with the government sponsor. The team produces a project summary to clarify the objectives and scope, and carries out market research to identify potential products within the technology field. The team then works hand in hand with the government sponsor to select products for further evaluation, based on the market research results, the evaluation timeline, and the available resources. The team must produce a project summary and a high-level test plan to get ready for testing (Sarah, 2007).
In the Test Preparation stage, the team obtains the evaluation products from the vendors, along with any other equipment needed for testing. In this stage, non-disclosure agreements (NDAs) are signed, vendor points of contact are identified, and the team contacts each vendor to discuss the test plan (Kuhrmann, Fernández and Steenweg, 2013). The team also sets the evaluation criteria for testing the products and defines any scenario tests that will be carried out. After that, the evaluation team installs the products in the environment where they will be tested and engages the vendors as technical questions come up. The evaluation team may call a technical exchange meeting (TEM) to obtain more information and requirements from subject matter professionals (Sarah, 2007).
During Testing, Results, and Final Report, the team tests and scores the products against the test methodology. The evaluation team must make sure that all products are tested under the same conditions. After testing, the team completes a full crosswalk of the scores given to each product to ensure consistency in the scoring. After the crosswalk, the team holds individual meetings with each vendor to review the results, correct any confusion concerning the product's functionality, and, where necessary, arrange a retest. The team then prepares a final report that includes the evaluation results and any other important information (Sarah, 2007).
In the Integration and Deployment stage, the final report handed over to the government offers a source of information that is critical for decision-making, but it is not a recommendation to buy particular products. If the government decides to buy a product, the evaluation team works with the government, as well as commercial contractors, to assist in the operation and integration of the solution into the working environment. The steps in this stage may include developing design guidance and vital documentation.
The STEP Workflow for Evaluation Teams
The STEP workflow is meant for technological assessments that involve many products. However, it can be customized to meet the requirements of teams evaluating a single product (Baggen, 2012).

Suggested Linear STEP Workflow

Stage 1: Scoping and Test Strategy
1. Carry out preliminary scoping
2. Scope with government sponsor
3. Conduct market research
4. Identify test architecture
5. Draft high-level test plan
Checkpoint: Stage 1

Stage 2: Test Preparation
1. Determine evaluation methodology, priorities, and test requirements
2. Conduct government requirements mapping
3. Refine and finalize test plan
4. Obtain the necessary hardware and software
5. Call technical exchange meetings (TEM) (optional)
Checkpoint: Stage 2

Stage 3: Testing, Results, and Final Report
1. Carry out testing and compile results
2. Conduct crosswalk
3. Share the results with the vendors
4. Hand over the final report
Checkpoint: Stage 3

Stage 4: Integration and Deployment
Actions depend on the sponsor
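The checkpoints in the workflow above act as gates between stages: a stage may begin only after every earlier stage's checkpoint has passed. A minimal sketch of that gating follows; the gate mechanics themselves are our illustration, not part of the STEP guide.

```python
# Minimal sketch of STEP's stage gating: a stage may start only after
# every earlier stage's checkpoint has passed, in order.
# The gate mechanics are illustrative, not from the STEP guide.

STAGES = [
    "Scoping and Test Strategy",
    "Test Preparation",
    "Testing, Results, and Final Report",
    "Integration and Deployment",   # optional fourth stage
]

class StepWorkflow:
    def __init__(self):
        self.completed = []          # stages whose checkpoint has passed

    def can_start(self, stage):
        """A stage may start only if all earlier checkpoints passed, in order."""
        i = STAGES.index(stage)
        return self.completed == STAGES[:i]

    def pass_checkpoint(self, stage):
        if not self.can_start(stage):
            raise RuntimeError(f"checkpoint out of order: {stage}")
        self.completed.append(stage)

wf = StepWorkflow()
wf.pass_checkpoint("Scoping and Test Strategy")
print(wf.can_start("Test Preparation"))                     # Stage 1 gate passed
print(wf.can_start("Testing, Results, and Final Report"))   # Stage 2 not yet done
```

Enforcing the order mechanically mirrors the process rule that, for example, the test methodology must be fixed before any product is installed or tested.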
Methods Used to Assess and Score Products
In the Standardized Technical Evaluation Process, teams evaluate and score products against defined criteria to identify the most appropriate choice for satisfying the sponsor's requirements. The teams must reach a definite evaluation of the products and present an argument that justifies their decisions (Lim, 2012). The process involves identifying a set of evaluation criteria and a suitable grouping of those criteria into categories. A scheme for assigning scores to products against the evaluation criteria must be defined. A set of numerical weights must also be provided to express the relative importance of the criteria and categories. The process then involves calculating an overall score for each product. In most cases, teams use spreadsheets to track the evaluation criteria, scores, and weights, and to compute the weighted score for each product (Sarah, 2007).
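The spreadsheet computation described above, with criterion scores multiplied by weights and rolled up per product, can be sketched as follows. The criteria, weights, and product scores are hypothetical examples, not values from the guide.

```python
# Sketch of STEP-style weighted scoring. Criteria, weights, and product
# scores below are hypothetical illustrations, not from the guide.

def weighted_score(scores, weights):
    """Overall score: sum of (criterion score * weight), normalized by total weight."""
    total_weight = sum(weights[c] for c in scores)
    return sum(scores[c] * weights[c] for c in scores) / total_weight

weights = {"accuracy": 3, "usability": 2, "cost": 1}      # relative importance
product_a = {"accuracy": 1, "usability": 0, "cost": 1}    # pass = 1, fail = 0
product_b = {"accuracy": 1, "usability": 1, "cost": 0}

for name, scores in [("Product A", product_a), ("Product B", product_b)]:
    print(f"{name}: {weighted_score(scores, weights):.2f}")
```

Keeping the weights separate from the raw scores makes the crosswalk easier: the team can re-check each raw score for consistency without disturbing the agreed-upon importance of each criterion.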
Determining the Evaluation Methodology
When getting ready for evaluation testing, the evaluation criteria must be identified. This is a critical step, because the final results will reflect how well the evaluation team set up its methodology. To develop the methodology, the team must carry out independent research and ask the government sponsor and subject matter professionals for guidance on the aspects and goals of the issue. This research enables the team to ensure that the sponsor's principal requirements are met, along with important functional considerations. The evaluation criteria must be specific, Boolean (two-valued) questions that are well stated and can be clearly tested (Kim and Yoon, 2012).
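Such Boolean criteria can be sketched as pass/fail checks against a product's recorded characteristics. The criteria and the product fields below are hypothetical, chosen only to show the shape of well-stated, testable yes/no questions.

```python
# Hypothetical Boolean evaluation criteria: each is a specific,
# testable yes/no question. Criteria and product fields are illustrative.
CRITERIA = {
    "Ingests CSV input": lambda p: "csv" in p["input_formats"],
    "Completes a scan in under 60 s": lambda p: p["scan_seconds"] < 60,
    "Provides an audit log": lambda p: p["has_audit_log"],
}

def evaluate(product):
    """Return a {criterion: True/False} verdict for one product."""
    return {name: bool(check(product)) for name, check in CRITERIA.items()}

product = {"input_formats": ["csv", "json"], "scan_seconds": 45, "has_audit_log": False}
results = evaluate(product)
print(results)
```

Because every criterion is two-valued, any two evaluators testing the same product under the same conditions should record the same verdicts, which is exactly the reliability property the methodology aims for.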
Conclusion
MITRE carries out various technological evaluations across a wide range of products and technologies. To keep pace with changing technology and sponsor needs, evaluation teams need a well-defined assessment process that is efficient, repeatable, and as objective as possible. The Standardized Technical Evaluation Process (STEP), developed in GO24, lays out a thorough procedure for technology evaluations of one or more products. The process is suitable for different areas of technology and has many benefits for both evaluation teams and government sponsors. However, some challenges are associated with the process, as discussed above.
References
Baggen, R., Correia, J. P., Schill, K., & Visser, J. (2012). Standardized code quality benchmarking for improving software maintainability. Software Quality Journal, 20(2), 287-307.
Dereli, T., & Altun, K. (2013). Technology evaluation through the use of interval type-2 fuzzy sets and systems. Computers & Industrial Engineering, 65(4), 624-633.
Kim, S., & Yoon, B. (2012). Developing a process of concept generation for new product-service systems: A QFD and TRIZ-based approach. Service Business, 6(3), 323-348.
Kuhrmann, M., Fernández, D. M., & Steenweg, R. (2013, May). Systematic software process development: Where do we stand today? In Proceedings of the 2013 International Conference on Software and System Process (pp. 166-170). ACM.
Laakso, M., & Kiviniemi, A. O. (2012). The IFC standard: A review of history, development, and standardization, information technology. ITcon, 17(9), 134-161.
Lim, C. H., Kim, K. J., Hong, Y. S., & Park, K. (2012). PSS Board: A structured tool for product-service system process visualization. Journal of Cleaner Production, 37, 42-53.
Sarah, B. (2007). Standardized Technical Evaluation Process: A user's guide and methodology for evaluation teams (pp. 1-51).
Teaching Case
Bank Solutions Disaster Recovery and Business Continuity: A Case Study for CSIA 485

Steve Camara
Senior Manager, KPMG LLP
1021 E Cary Street, Suite 2000
Richmond, VA 23219

Robert Crossler
Vishal Midha
Assistant Professor, Computer Information Systems
The University of Texas – Pan American

Linda Wallace
Associate Professor, Accounting and Information Systems
Virginia Tech
ABSTRACT
Disaster Recovery and Business Continuity (DR/BC) planning is an issue that students will likely come into contact with as they enter industry. Many different fields require this knowledge, whether employees are advising a company implementing a new DR/BC program, auditing a company's existing program, or implementing and/or serving as a key participant in a company program. Oftentimes in the classroom it is difficult to find real-world practice for students to apply the theories taught. The information in this case provides students with real-world data to practice what they would do if they were on an engagement team evaluating a DR/BC plan. Providing students with this opportunity better prepares them for one of the jobs they could perform after graduation.
Keywords: Case study, Computer security, Critical thinking, Experiential learning & education, Information assurance and security, Role-play, Security, Team projects