www.aacei.org
COST ENGINEERING
January/February 2015
THE JOURNAL OF AACE® INTERNATIONAL -
THE AUTHORITY FOR TOTAL COST MANAGEMENT®

FORENSIC SCHEDULE ANALYSIS METHODS: RECONCILIATION OF DIFFERENT RESULTS
RISK ANALYSIS AT THE EDGE OF CHAOS
INTEGRATED PROJECT REPORTING USING DASHBOARDS: HARNESSING THE POWER OF PRIMAVERA P6
COST ENGINEERING JANUARY/FEBRUARY 2015

CONTENTS

TECHNICAL ARTICLES
4  Integrated Project Reporting Using Dashboards: Harnessing the Power of Primavera P6
   John W. Blodgett and Brian Criss, PSP
13 Forensic Schedule Analysis Methods: Reconciliation of Different Results
   John C. Livengood, CFCC PSP, and Patrick M. Kelly, PE PSP
28 Risk Analysis at the Edge of Chaos
   John K. Hollmann, PE CEP DRMP

ALSO FEATURED
2  AACE International Board of Directors
2  Cost Engineering Journal Information
27 Sam Griggs Awarded ICEC Fellow
38 SOMA Beirut Residential Project
41 Professional Services Directory
41 Index to Advertisers
42 AACE International Online Store
44 Calendar of Events
TECHNICAL ARTICLE

Integrated Project Reporting Using Dashboards: Harnessing the Power of Primavera P6

John W. Blodgett and Brian Criss, PSP

Abstract: "Integrated reporting" can be accomplished by using reporting and dashboard tools that support accessing multiple data sources directly or through a data warehouse. The data sources can include Primavera P6 for planning and scheduling; a financial tool such as SAP; and/or internal home-grown databases. The data warehouse is the "central" database containing data with differing structures, used to produce reports that incorporate the data directly from their auditable source databases. The report/dashboard solution is supported by programming that aligns the data on common elements to provide enterprise-level information, backed by individual system audit reports. A further objective is to move beyond siloed spreadsheet data by incorporating the various sources into a single source of consolidated information. The data warehouse can also be used to store snapshots of datasets in order to perform trending and analytics. This article discusses the business case for developing integrated reporting dashboards; an example case study of an existing dashboard developed and used by a utility company; and alternative approaches, such as off-the-shelf analytics offerings.

This article was first presented as OWN.1670 at the 2014 AACE International Annual Meeting in New Orleans, LA.

Key Words: Analytics, integrated reporting dashboards, data, databases, Primavera P6, planning and scheduling, SAP, and trends.

Organizations striving for effective project controls are consistently looking for better ways to communicate among the various stakeholder groups. Typically, the project controls umbrella within an organization entails several groups, each with its own specific agenda (or criteria for success by which it is measured) and its own software applications used to meet these goals. The challenge for executives and project controls managers alike is the inherent desire to consolidate the information captured and maintained by these various groups within the organization (see figure 0).

The term commonly heard in reference to organizational structures is "silos." This often has a negative connotation; however, the dynamic may also be seen as a necessity for the organization to function properly as a whole. Charles Duhigg, in his book "The Power of Habit," examines the makeup of large companies [1]. He describes companies less as a single whole than as a fiefdom of smaller components. These components have evolved independently over time, as have the rules (both written and otherwise) by which they interact with each other. Further, Duhigg maintains that the separation of these groups is essential to the overall efficiency of the organization and allows day-to-day work to get done without elevated levels of bureaucracy to govern decisions.

For project controls professionals, this dynamic is often the central concern that needs to be overcome: how best to achieve interoperability among the tools used by the various internal interest groups, in order to arrive at a consolidation of information by which decisions may be made in a timely manner? Often, very manual and disjointed reporting has been developed, leading to conflicting "your spreadsheet vs. my spreadsheet" dynamics.

In addition, project controls often seeks to integrate various tools together so that consistent data can be seen across them. There is the concept of the "system of record" for certain types of data (i.e., schedule or financial data),
whereby people trust the data coming
out of a particular system, and they
often look for how that data “ties” to
data from other systems. This can be
difficult because of differing data
structures in different systems. For
example: a department’s budget may
be seen in SAP and a home-grown
database, but if the report parameters
and data structures are not consistent,
the data will not tie out. Thus, people
lose confidence in project controls'
ability to provide accurate data.
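The tie-out problem described above can be made concrete with a small reconciliation check that compares the same figure across two systems and flags projects that do not align. This is only an illustrative sketch in Python (the case-study dashboard itself was built on other technology), and the project IDs, amounts, and field layout are all invented:

```python
def tie_out(source_a, source_b, tolerance=0.01):
    """Compare budget values keyed by project ID across two systems.

    source_a / source_b: dicts mapping project ID -> budget amount.
    Returns (project_id, amount_a, amount_b) rows that fail to tie out
    within the tolerance, including IDs missing from either side.
    """
    mismatches = []
    for pid in sorted(set(source_a) | set(source_b)):
        a = source_a.get(pid)
        b = source_b.get(pid)
        if a is None or b is None or abs(a - b) > tolerance:
            mismatches.append((pid, a, b))
    return mismatches

# Hypothetical budgets from SAP and a home-grown database.
sap = {"P-1001": 250_000.00, "P-1002": 90_000.00, "P-1003": 40_000.00}
homegrown = {"P-1001": 250_000.00, "P-1002": 87_500.00}

for pid, a, b in tie_out(sap, homegrown):
    print(pid, a, b)  # P-1002 and P-1003 fail to tie out
```

A report built on such a check gives each audience the same answer to "does this number tie back to the system of record?" rather than competing spreadsheets.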
One method by which this aspect
of effective project controls may be
addressed is via integrated reporting.
This method seeks to harvest
information from various systems for
purposes of reporting; however, this
solution does not attempt to insert
data from one system into another. In
short, data can (and will) continue to
exist within its existing application,
maintained by the system owners,
only now it will be combined with
other data related to the same project.
For purposes of this discussion,
examination and examples will be
presented based on the use of
Primavera P6 as the application used
for project execution, SAP as the
solution for financial information, and
a “home grown” internal database of
information related to management of
the projects. These unique
applications are deployed and
managed by various individuals and
groups within the organization and
represent the “systems of record” for
project controls information.
The Problem
Organizations seeking efficient
and effective project controls are
always searching for new ways to
make information available to user
groups and decision makers. This
information is often held in different
software applications and the ability to
allow executives to see the “full
picture” is often a challenging task.
More often than not, an integrated
solution is desired by which system
information is passed to another
solution via automated routines and
custom programming. Evidence of this
fact is found in the preponderance of
integration tools on the market which
seek to provide organizations with
“plug and play” utilities which connect
unique systems in an effort to provide
integrated software tools.
Systems of record are an
important concept for integrated
reporting for the simple fact that all
information disseminated in the form
of reports or interactive dashboards
need to “tie back” to a single source of
information. For those who seek to
challenge the project controls
leadership and objectives, this will be
one of the primary methods of
validation. To produce a project
controls solution which effectively
drives decisions and shares
information, confidence in the data
which is being reviewed is vital to long
term success. When information
presented in reports or shared in
meetings cannot or does not align with
the system of record, the end result
may be further division within the
organization, leading to slow or
misinformed decisions, inefficient
labor intensive data crunching, and
ultimately, ineffective project controls.
Many organizations use Primavera
P6 as the execution arm of the overall
project controls environment.
Primavera provides a database
platform by which projects schedules
are maintained. Fundamental to these
project schedules is the use of
activities and logic to drive key
milestones for execution of the work;
and these schedules, in turn, are
updated on a routine basis (i.e.,
monthly, weekly, or daily depending
on the direction of management).
Additionally, Primavera P6 provides
the ability to code projects to reflect
unique elements such as where the
job is scheduled to take place, and
who is managing the work. These
project codes provide an excellent
source of information in the
establishment of portfolios of projects,
which allow project controls
organizations to break down projects
and information into meaningful
groups for reporting and analysis. One
challenge is that Primavera has
evolved from a single project local
environment, to an enterprise
database environment. As a result, the
P6 database was configured to clearly
distinguish between project, activity,
and resource types of data. The net
result of this evolution is that reports
generated from within the application
cannot easily traverse these unique
types of data. For example,
management wants to see reports
with a single line per project. The
single line combines project level data,
such as project name, project manager
name, location, etc., with activity level
data, such as construction start date
and construction finish date. No
project or activity level layouts native
to Primavera are capable of this.
Oftentimes, Primavera experts are
called on to develop macro work-
arounds in Excel reports to bring
activity milestone dates up to the
project level.
Cost information is primarily the
criterion by which most organizations
judge the effectiveness or inadequacy
of their project controls solutions.
Financial information related to a
project has many unique aspects
beyond just a simple budgeted
amount. Rather, organizations often
have a desire to track much more
discrete information such as a life of
project budget (multi-year), FY annual
budget, amount funded, multiple
funding mechanisms, committed
costs, contingency, and amount
authorized to spend. This information
is often included within the system of
record for the simple fact that
someone within the organization
believes this data to be important and
those responsible for establishing effective project controls solutions must, in turn, find a way to address this in the overall reporting of projects.

Figure 0 — Example of “Silos” Organizational Structure

In large organizations, the
financial system of record is often a
large Enterprise Resource Planning
(ERP) application such as SAP or
Oracle. Most of these ERP systems are
well built to handle company
financials, logistics, purchasing, and
operations, but they are not leaders in
the area of capital project controls.
Integration of information from
systems of record entails that the
desired information be exchanged
between two unique systems. That is
to say, there is a desire on the part of
the organization that when data in
one system is updated or changed,
it, in turn, updates fields in another
project controls software application.
This interoperability is a technical
reality given today’s project controls
solutions and open architecture
(integration API and web services
functionality) found in many of the
software products, including
Primavera P6.
In order to achieve
interoperability between two systems,
there needs to be an effort to map
data between the two systems. The
process of data mapping entails that
the receiving application provides a
place for the integrated data to reside.
For example, when looking at
integration of financial information
into P6, this information needs to
populate within the P6 database as
either a cost value on a resource, an
expense item at the activity level, or a
user defined field (UDF). These would
represent the typical places which a
cost value would be represented
within the application (there may also
be options to explore for accepting
cost data at the project level).
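The data-mapping step described above can be sketched as a simple translation table from ERP fields to the places the receiving schedule tool can hold a cost value. The field names, UDF labels, and record layout below are assumptions made purely for illustration, not an actual SAP or P6 schema:

```python
# Illustrative data mapping: each incoming financial field is routed to a
# target location in the receiving application (a user defined field or an
# activity-level expense). All names here are hypothetical.
FIELD_MAP = {
    "ERP_BUDGET_TOTAL": ("udf", "Life of Project Budget"),
    "ERP_COMMITTED":    ("udf", "Committed Cost"),
    "ERP_ACTUAL_COST":  ("expense", "Actual Cost to Date"),
}

def map_financial_record(erp_record):
    """Translate one ERP row into target placements; unmapped fields are skipped."""
    mapped = []
    for source_field, value in erp_record.items():
        if source_field in FIELD_MAP:
            target_type, target_name = FIELD_MAP[source_field]
            mapped.append({"target": target_type, "name": target_name, "value": value})
    return mapped

row = {"ERP_BUDGET_TOTAL": 1_200_000, "ERP_COMMITTED": 300_000, "ERP_NOTES": "n/a"}
print(map_financial_record(row))
```

The point of making the mapping explicit is that every integrated value can be traced back to exactly one source field in the system of record.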
The key question often pondered
during the data mapping process is
whether the integration transforms
the receiving application away
from its intended purpose. Primavera
P6 is an excellent tool for planning and
forecasting of costs and unit
information from activity data. But is
there value in attempting to transform
P6 into a tool with the same robust
accounting information as the
financial tools from which it may be
receiving information? Is the product
being migrated from its intended use?
Again, the key driver for this decision
is the organizational desire to achieve
consolidated sources of information
with reduced effort in handling the information.

Figure 1 — Portfolio Level Data Sample (1)
Figure 2 — Portfolio Level Data Sample (2)
A key example of this is the use of
“Store Financial Period Performance”
in Primavera P6, which must be done
project-by-project, and cannot be
done en masse for the enterprise at
the end of each month. This poses a
challenge of getting hundreds or
thousands of projects all synced up
with actual costs, and having multiple
people trying to do Store Period
Performance all on the same date with
the same accuracy as that which is
already produced by the controlled
financial close of the company’s ERP system.

Figure 3 — Project List View
Figure 4 — Project Page View (1)
Figure 5 — Project Page View (2)
Another consideration is whether
or not the organization will require
more resources to handle the
increased data which needs to be
maintained within a system like P6 to
facilitate the integration. Often,
integration routines are predicated on
coding information (project, activity,
or resource coding). Depending on the
scope of the integration pondered,
there is a strong likelihood that the key
integration codes will need to be
maintained by P6 users on the front-
end of the application. This will place a
greater emphasis on quality control
within a system like P6 to
prevent the integration from failing at
routine intervals. After all, integration
solutions essentially replace what
front-end application users would do
to align the information, and
automate the process; however, the
routines do not use judgment. The
routine is established to execute
specific tasks at triggered intervals,
and as such, relies on the structure of
the information found in each
application to perform properly.
Recommended Solution
Integrated reporting provides a
method for delivery of the core
requirements to provide centralized
reporting for the project controls
organization. The idea is similar to the
concept of integrated application
data; however, the difference is that
the data does not actually transfer to
applications outside of the system of
record. In this method, the financial
data is entered and continues to
reside in SAP, and schedule
(milestone) data continues to reside
within Primavera P6. This method
allows for unique software applications to continue supporting the lines of business, their constituents, and the desired use case without having to compromise the way they are used in order to facilitate software interoperability.

Figure 6 — Primavera Notebook Topics
Figure 7 — Data Quality View
The use of custom programming
is still a requirement in the
development of an integrated
reporting solution. Namely, the
objective is to create automated
routines to mine the desired
information from the unique systems
of record. In the case of Primavera P6,
web services are used to generate a
nightly ETL (extract, transform, and load)
into a data warehouse. A similar
process will be required for data
coming from other systems with the
objective being to create automated
routines which allow for data to be
refreshed on a regular basis.
Additionally, the use of a “flat file”
transfer to a secure FTP site may also
substitute for the use of a web
services extract of financial data
(Note: direct access to financial
systems is often tightly controlled, and
the preference is a file-based transfer
of information as requested by the
project controls organization). Further,
the owners of the data within the ERP
are often very nervous and resistant to
providing data outside the system. The
project controls organization will need
to go through a process to explain
what they are doing with the data and
why it’s needed. There will often be a
push to bring data into the ERP system
and use the ERP system’s data
warehouse and reporting tools. This
can be something to consider;
however, oftentimes there is a much
larger volume of project data that
resides outside of the ERP (schedule
data, text status fields, and risk
management information), so it does
not seem efficient to push a large
volume of data into the ERP system. In
addition, there are often tighter
controls around report development
and modification within the ERP
system.
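The nightly ETL routine described above can be sketched as a small extract/load step into a reporting warehouse. The fetch function below merely stands in for whatever web-services or flat-file extract the systems of record actually expose, and the table layout and sample rows are invented; Python and SQLite are used here only to illustrate the pattern:

```python
import sqlite3
from datetime import date

def fetch_p6_projects():
    """Stand-in for a web-services or flat-file extract of schedule data.

    Returns rows of (project_id, project_name, construction_finish). A real
    solution would call the scheduling tool's export interface instead.
    """
    return [
        ("P-1001", "Substation Upgrade", "2015-06-30"),
        ("P-1002", "Line Rebuild", "2015-09-15"),
    ]

def load_warehouse(conn, rows, snapshot_date):
    """Load one nightly snapshot into the reporting data warehouse."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS project_snapshot ("
        "snapshot_date TEXT, project_id TEXT, project_name TEXT, finish TEXT)"
    )
    conn.executemany(
        "INSERT INTO project_snapshot VALUES (?, ?, ?, ?)",
        [(snapshot_date, *row) for row in rows],
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load_warehouse(conn, fetch_p6_projects(), date(2015, 1, 31).isoformat())
count = conn.execute("SELECT COUNT(*) FROM project_snapshot").fetchone()[0]
print(count)  # 2
```

Because every load is stamped with a snapshot date, the same table also supports the trending comparisons discussed later in the article.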
In order to facilitate an integrated
reporting platform, a unique project
identifier must be present to make the
solution viable. Ideally, this should be
a consideration when setting up the
Primavera P6 database and
specifically, how the projects will be
identified in P6. When the information
is compiled within the data
warehouse, there will need to be a
common method of aligning the P6
project data along with the financial
information. To best facilitate this
operation, it is useful to evaluate
which fields within the software
applications are guaranteed to be
unique. In the case of
Primavera P6, the project ID field is a
requirement for all projects and
cannot have duplicates within the
database.
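Since the whole alignment hinges on a unique identifier, it is worth checking candidate key fields for duplicates before committing to one. A minimal sketch (the sample rows and field names are invented for illustration):

```python
from collections import Counter

def duplicate_keys(rows, key_field):
    """Return the values of key_field that appear more than once in rows."""
    counts = Counter(row[key_field] for row in rows)
    return sorted(value for value, n in counts.items() if n > 1)

# Hypothetical rows from a home-grown project database, keyed by an
# ERP-generated order number.
rows = [
    {"order_no": "4500017", "name": "Substation Upgrade"},
    {"order_no": "4500018", "name": "Line Rebuild"},
    {"order_no": "4500017", "name": "Substation Upgrade (Phase 2)"},
]
print(duplicate_keys(rows, "order_no"))  # ['4500017']
```

A field that passes this check in every source system is a safe candidate for the common project identifier.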
Once the methods of transferring
data from the systems of record into
the reporting data warehouse are
established, the next step is the
development of the actual reporting
interface. The actual delivery of the
reporting solution is one area where
organizations have a great deal of
flexibility. The organization should, at
this point, be prepared to address
questions related to the overall design
of the interface including usability
options and data display.
Depending on the size of the
organization, and the various roles of
those using the integrated reporting
solution, the interface should be
designed with the idea that it will
need to provide flexibility and use for
all members of the organization.
Executive level personnel will often
have a desire to consume information
at a summary level with graphical
displays which provide actionable
items and KPIs for the organization.
However, in order to achieve the
maximum benefit for the organization,
the integrated reporting solution
should also possess the ability to
provide detailed information related
to specifics of the projects such as
detailed cost data, schedule and
milestone deliverables, as well as
written text information from those
responsible for the project delivery.
To address these variable levels of
reporting detail, design of an interface
to allow users of all levels within the
organization to navigate to the most
beneficial level of information is
essential. A quality integrated
reporting solution should allow for
high level information to be traversed
down to the lower levels of meta data
to perform root cause analysis. This
capability to “drill down” from high
level metric information to lower
levels of detailed data should be an
overall requirement for any integrated
reporting solution.
For the solution developed as part
of this example, the overall
architecture provides three levels of
information:
1. Portfolio Level—This is the
primary landing page for users. The
objective is to display aggregate
project level information related to
schedule and costs. This level of detail
is controlled via filter parameters to
allow for projects to be displayed in
meaningful groups. These filter
parameters are based on the project
code assignments in P6. The portfolio
level is primarily graphical and
intended to provide intuitive data
across multiple projects.
2. Project List View—This level
represents the second level of data for
display to users. The projects
represented are based on the
portfolio level selections and the
project list view is intended to provide
greater levels of detail when selecting
a portfolio graphic. This view provides
much greater levels of detail via
column data which can be customized
by the user at run time. The ability to
sort and show/hide selected columns
is available, as well as the ability to
export the information to Excel.
3. Project Page View—The
project page view displays both
graphical, as well as narrative data
(based on P6 notebook topics) for a
single project.
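The three levels above can be illustrated as a simple drill-down: an aggregate rollup, a filtered list behind one portfolio slice, and a single project's detail record. The project codes, fields, and data below are invented; this Python sketch only mirrors the navigation logic, not the actual dashboard implementation:

```python
# Hypothetical project rows; "region" stands in for a P6 project code.
projects = [
    {"id": "P-1001", "region": "North", "budget": 250_000, "status": "On track"},
    {"id": "P-1002", "region": "North", "budget": 90_000, "status": "Late"},
    {"id": "P-1003", "region": "South", "budget": 40_000, "status": "On track"},
]

def portfolio_level(rows):
    """Level 1: aggregate by a project code for the landing page."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0) + row["budget"]
    return totals

def project_list(rows, region):
    """Level 2: the projects behind one portfolio slice."""
    return [row for row in rows if row["region"] == region]

def project_page(rows, project_id):
    """Level 3: one project's detail record."""
    return next(row for row in rows if row["id"] == project_id)

print(portfolio_level(projects))
print([r["id"] for r in project_list(projects, "North")])
print(project_page(projects, "P-1002")["status"])
```

Each level answers a narrower question, which is exactly the "drill down" requirement stated above.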
The ability to traverse project
information within the application is
an essential element to achieve the
desired adoption. By building the
solution to accommodate the
intended use case for executive level,
as well as PM level consumption, the
integrated reporting solution will
provide significant return on
investment.
The use of project level notebook
topics in P6 is a good way to get
different categories of textual status
for projects. Different notebook topics
can give status of project elements
such as environmental, permitting,
variance explanations, etc. Project
managers like to keep a history of
status updates, so in order to prevent
a long historical narrative from
printing in your current status report,
a different notebook topic can be used
to store the archive status. The
movement of the current status report to
the archive can be a simple copy and paste
by the project manager during each
update. This process could also be
automated through the dashboard
front end.
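The automation suggested above can be sketched as a small routine that moves the current status entry into the archive topic during each update. The topic names and entry text here are assumptions for illustration, not P6's own notebook structure:

```python
# Sketch of automating the current-status-to-archive move. "notebooks"
# stands in for one project's notebook topics as a dict of topic -> text.
def archive_status(notebooks, current_topic="Current Status",
                   archive_topic="Status Archive"):
    """Prepend the current status entry to the archive topic, then clear it."""
    current = notebooks.get(current_topic, "")
    if current:
        existing = notebooks.get(archive_topic, "")
        notebooks[archive_topic] = current + ("\n" + existing if existing else "")
        notebooks[current_topic] = ""
    return notebooks

nb = {"Current Status": "Jan: permits received.",
      "Status Archive": "Dec: design done."}
archive_status(nb)
print(nb["Status Archive"].splitlines())  # newest entry first
```

Running this as part of the update cycle keeps the current status report short while preserving the full narrative history.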
A further functionality that was
desired in this dashboard solution was
trending data. Managers wanted to
know what changed from the last time
they saw the data. The dashboard
employed a data warehouse capability
where snapshots of the dataset were
taken every night. This allowed
management to easily compare
current data to any past period down
to the day, in order to see what
changed (see right side graphic pane
on figure 1 for an example of this
trend data). All of the trend graphics
offered drill-downs to lists that
showed the old data details on top of
the new data, so it could be easily
seen what was different at any given
reporting period.
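The snapshot comparison behind those trend graphics reduces to diffing two dated datasets keyed by project ID. A sketch, with invented project IDs and finish dates (any tracked field could be compared the same way):

```python
def snapshot_diff(old, new):
    """Return {project_id: (old_value, new_value)} for values that changed,
    including projects that appear in only one snapshot."""
    changes = {}
    for pid in set(old) | set(new):
        if old.get(pid) != new.get(pid):
            changes[pid] = (old.get(pid), new.get(pid))
    return changes

# Two hypothetical nightly snapshots of construction finish dates.
jan_01 = {"P-1001": "2015-06-30", "P-1002": "2015-09-15"}
jan_31 = {"P-1001": "2015-07-15", "P-1002": "2015-09-15",
          "P-1003": "2015-12-01"}
print(snapshot_diff(jan_01, jan_31))
```

Showing the old value alongside the new one is what lets a manager see at a glance what moved between any two reporting dates.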
There are many different
technology alternatives available to
produce integrated reporting and
dashboards. The technology
employed on this particular
dashboard solution is just one of the
options for integrated reporting. This
particular dashboard was built using
Ruby on Rails with JavaScript-based
programming and a SQL database.
Highcharts JS and Ext JS were used for
graphics and tables. Web services were
used to integrate pulls from various
data sources. Future-proofing was
accomplished through the use of
HTML5 and CSS3. Other programming
languages and technology available
include Python, Tableau, and Pentaho.
Most ERPs, including SAP and
Oracle, also employ native reporting
and analytics systems: SAP Business
Warehouse and Business Objects 4.0
and Oracle Business Intelligence (BI)
11g. Primavera P6 Analytics 3.0 is built
on Oracle BI 11g. These systems offer
great graphics and analytics capability,
but did not allow for the same
flexibility and control that was desired
for this particular dashboard
application. None of the project cost
data was being integrated into
Primavera P6, so that also limited the
use of Primavera P6 Analytics.
Historical data snapshots were also
not available in Primavera P6
Analytics. SAP Business Objects was
not used because most of the project
data resided outside of SAP, and it
would have been a much larger data
transfer to get data into SAP, rather
than the current solution that
transfers data out of SAP.
Lessons Learned
In the development of the
integrated reporting solution, the idea
of “horizontal” traceability is an
important factor. Horizontal refers to
the idea that a unique project is
commonly identified across all of the
different software applications from
which data will be used in the
reporting solution. In this specific
instance, the SAP generated order
number was used as the project ID in
P6 and is also referenced within the
homegrown project database system.
What this provided was a single
unique identifier across all systems,
even if they were used differently in
other applications (i.e., not necessarily
the project ID). The key consideration
in deciding what identifier to use is
whether the system will allow
duplicates. Only the project ID in
Primavera P6 will not allow duplicates.
When developing the output for
reports and consolidating the data,
programmers may more easily align
the SAP, project database, and P6 data
by referencing this field.
Should there be a lack of
traceability across the other systems,
the net result would have been a
much slower development time along
with additional costs for development
of translation tables to align data
programmatically. When establishing
project controls software applications
within any organization, it is a good
strategy to evaluate how financial
systems identify projects and make
certain that a project found in other
systems (such as P6), may be easily
tied back to the data for that project in
the financial system.
Another key lesson learned is
around data quality in Primavera P6.
The group was very new to Primavera
P6, and data quality issues were
present and hard to detect. The
dashboard exposed these issues in a
very public way, which led to
improvements because people did not
want to be embarrassed with blanks
or incorrect data. Managers used the
dashboard on their tablets in
meetings, which helped encourage
the proper use of Primavera P6.
However, good CPM scheduling
practices were not a strong skill set in
the group; therefore some quick and
easy data quality metrics were
developed to expose incorrect CPM
practices. A dashboard page was
developed that grouped individuals by
the number of activities “riding” the
data date, and the number of out-of-
sequence activities. Claim Digger,
schedule logs, and reports are
sometimes used for this, but the
dashboard gave the capability to
quickly access this information across
the entire portfolio, grouped and
ranked by individual. This allowed the
data quality analysts to focus their
attention on acute problem areas and
people who needed additional
training and support.
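The metrics behind that data quality page can be sketched as a per-scheduler tally of activities "riding" the data date and out-of-sequence updates. The field names, the riding-the-data-date rule (unstarted work whose start sits exactly on the data date), and the sample activities are all assumptions for illustration:

```python
from collections import defaultdict

def quality_metrics(activities, data_date):
    """Group simple CPM quality counts by the responsible scheduler."""
    scores = defaultdict(lambda: {"riding": 0, "out_of_sequence": 0})
    for act in activities:
        person = scores[act["scheduler"]]
        # Unstarted activity scheduled to start exactly on the data date.
        if act["pct_complete"] == 0 and act["start"] == data_date:
            person["riding"] += 1
        if act["out_of_sequence"]:
            person["out_of_sequence"] += 1
    return dict(scores)

acts = [
    {"scheduler": "A. Smith", "start": "2015-01-31",
     "pct_complete": 0, "out_of_sequence": False},
    {"scheduler": "A. Smith", "start": "2015-01-31",
     "pct_complete": 0, "out_of_sequence": True},
    {"scheduler": "B. Jones", "start": "2015-03-01",
     "pct_complete": 0, "out_of_sequence": False},
]
print(quality_metrics(acts, "2015-01-31"))
```

Ranking schedulers by these counts is what let the analysts focus training on the acute problem areas described above.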
One key aspect of the overall
reporting solution is to clearly identify
who is the target audience for the
information. For example, data
required by executives within the
organization to make informed
decisions may require high level
summary information which may be
quickly consumed and graphical in
nature. On the other hand, project
managers may desire far greater levels
of detail in the tracking and managing
the day-to-day operations of the
project. The key point here is to
consider the audience for each of the
elements represented in the
integrated reporting solution. These
considerations may be seen as part of
the overall scoping exercise when
organizations are developing the
content and delivery methods of
reports (as well as data required for
inclusion in the data warehouse).
If the chosen method of delivery
is to have the reporting solution
accessible via a web browser, then it is
important to know what the
supported web browser is for the
overall organization. If the reporting
solution has developed custom
graphics and views, it is important that
those be developed for use with the
proper browser. Failure to realize this
after the fact may cause delays with
the production deployment as
modifications to the programming
code used in the development of the
reporting solution may require
modification for earlier browser
versions. For large organizations, it is
not uncommon to see older browser
versions deployed, since the testing of
newer versions by the organization's IT
staff may be time consuming. In
addition, users are unlikely to have
administrative rights on their
machines to download or use newer
browser versions so the lesson here is
to make sure the development of the
reporting solution is in alignment with
the supported browser which will
operate it.
Another key consideration is
browser neutrality and obsolescence.
The solution should have the ability to
work easily on tablet computers, as
well as traditional desktops. While
companies are allowing bring-your-own
(BYO) tablets and smartphones, access
to a company's intranet is often
restricted or involves an extra login
step via a remote access solution.
hosted reporting system can help
make this access easier.
Conclusion
Integrated reporting through the
use of dashboards and analytics tools
is quickly becoming as important as
implementing the tools themselves.
The employer of one of this article’s
authors recently presented on the fact
that their last three employee hires
have been programmers, as the
demand for integrated reporting
solutions increases. As tools such as
Primavera P6 and Primavera Unifier
become more powerful and
integrated, so too will the demand to
integrate them in ways that the
software developers have not yet
conceived. In a large enterprise,
integrated reporting plays a
particularly important role because of
the vast, often siloed groups using
different tools and the need for
management to see “one view of the
work.” As technology improves, and
various software enhancements are
released, there will be more tools that
try to do more project management
functions. However, there are often
“tool wars” within organizations
where it is difficult to get groups to all
agree on a one size fits all solution.
Integrated reporting helps solve that
issue because reports and graphics
can consolidate information from
multiple, auditable data sources. Then
the project controls group can become
more focused on data quality and
analytics of the results, rather than
spending time manually consolidating
and scrubbing data into reports.
This article presented just one
example of how integrated reporting
can be done through a dashboard.
There are many other options
available. Future considerations could
be given to building user-friendly
front-ends and automations to tools
that were once the realm of the
highly-experienced professional user.
The use of large databases and new
technology is making reporting and
graphical options available that are
really only limited by the imagination.
With the right programming or
solution, companies can achieve
integrated reporting success that will
lead to efficiencies and improved
employee engagement with the tools.
◆
REFERENCE
1. Duhigg, Charles, "The Power of a
   Crisis," Chapter 6 in The Power of
   Habit, First Edition, pages 223-226,
   Random House Publishing Group,
   New York, 2012.
ABOUT THE AUTHORS
John W. Blodgett is
with the Pacific Gas
& Electric Co. He
can be contacted by
sending e-mail to:
j7b2@pge.com
Brian Criss, PSP, is
with DR McNatty &
Associates Inc. He
can be contacted by
sending e-mail to:
bcriss@drmcnatty.com
FOR OTHER RESOURCES
To view additional resources on
this subject, go to:
www.aacei.org/resources/vl/
Do an “advanced search” by “author
name” for an abstract listing of
all other technical articles this author
has published with AACE. Or, search
by any total cost management sub-
ject area and retrieve a listing of all
available AACE articles on your area
of interest. AACE also offers pre-
recorded webinars, an Online Learn-
ing Center and other educational
resources. Check out all of the avail-
able AACE resources.
COST ENGINEERING JANUARY/FEBRUARY 2015

TECHNICAL ARTICLE

Forensic Schedule Analysis Methods: Reconciliation of Different Results

John C. Livengood, CFCC PSP and Patrick M. Kelly, PE PSP

Abstract: Perceived wisdom within the construction industry is that different Forensic Schedule Analysis (FSA) methods produce different results on the same set of facts. Although there are many potential variables that could cause this, such as bias of the analyst or the quality of the implementation of a method, some experts have expressed concern that the methods themselves generate different results, and therefore some may be potentially defective. But do the different methods actually generate different answers when applied properly to the same set of facts, or are the observed differences natural aspects of the methods that can be documented and quantified? This article will explore that question by examining a specific set of facts and applying each of the four major FSA methods – the As-Planned vs. As-Built, Contemporaneous Period Analysis, Retrospective TIA, and Collapsed As-Built – to those facts. Further, if the methods do generate different results, the article will explain how and why that occurs, how to quantify and reconcile the differences, and what conclusions an FSA expert should draw from those differences. This article was first presented as CDR-1593 at the 2014 AACE International Annual Meeting in New Orleans, LA.

Key Words: Forensic Schedule Analysis, construction, as-planned, as-built, contemporaneous period analysis, retrospective TIA, and collapsed as-built

Forensic Schedule Analysis (FSA) is the applied use of scientific and mathematical principles, within a context of practical knowledge about engineering, contracting, and construction means and methods, in the study and investigation of events that occurred during the design and construction of various structures, using the Critical Path Method (CPM) or other recognized schedule calculation methods [5]. An analyst begins an FSA with a review and analysis of the planned construction sequencing in the baseline model; calculation and analysis of activity durations (with respect to planned quantities, estimated resources, and productivity levels); activity sequencing; resource scheduling; and evaluation of the trade-offs between cost and time. The analyst then uses either the existing model (the CPM schedule) or a newly created mathematical or statistical model to analyze, in a verifiable and repeatable manner, how actual events interacted with the baseline model and its updates, in order to determine the significance of a specific deviation or series of deviations from the baseline model and their role in determining the ultimate sequence and duration of tasks within the complex network [4]. The form that the mathematical or statistical model takes defines the analysis “method.”

AACE’s Recommended Practice on Forensic Schedule Analysis (RP 29R-03) is a unifying technical reference developed with the cooperation of dozens of experienced FSA experts, and the analyses performed for this article were conducted in keeping with the principles and method implementation protocols (MIPs) described therein. There are nine MIPs overall; however, the RP breaks the methods into four major families: the As-Planned versus As-Built (APAB/MIP 3.2), the Contemporaneous Period Analysis (CPA/MIP 3.3, sometimes commonly called the “Windows” method), the Retrospective Time Impact Analysis (RTIA/MIP 3.7), and the Collapsed As-Built (CAB/MIP 3.9). The further breakdown of the families into the nine methods – the MIPs – is defined by factors such as the timing of the analysis, whether the model relies on active CPM calculations or not, whether the model adds or subtracts fragmentary networks (“fragnets”) to simulate the effects of delays, or whether the analysis is performed globally or in periodic steps [6].
A definition of the term
“window” is necessary to avoid
confusion among its many uses.
Although “Windows” is sometimes
used to describe a specific analysis
method, it is important to understand
that a window is a slice of time in the
life of a project, within which the
analyst will use the selected method to
examine that window’s events. Most
of the methods can be implemented in
a way that subdivides the project
duration into windows. The choice and
definition of the periods of time used
to form the windows will be
dependent on the circumstances.
However, it is common practice to
time the start and finish of the
windows to coincide with the monthly
progress update and pay application.
Occasionally, the start and finish
points for windows are identified to
correspond with specific delay events
which are of interest to the analyst.
Although this is potentially valuable, it
is inadvisable to have analysis
windows which are wider than the
period encompassed by the progress
updates. The monthly update (or pay
application date, if no updates exist)
should be the maximum width of a
window.
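The windowing convention described above can be sketched as a small helper that slices a project span into analysis windows bounded by the monthly update data dates. This is an illustrative sketch, not from the article; all names are the author's assumptions.

```python
from datetime import date

def monthly_windows(start: date, finish: date,
                    update_dates: list[date]) -> list[tuple[date, date]]:
    """Slice the project span into analysis windows bounded by update data
    dates, so no window is wider than one progress-update period."""
    bounds = sorted(d for d in update_dates if start < d < finish)
    edges = [start] + bounds + [finish]
    # Pair each edge with the next one to form (window_start, window_finish).
    return list(zip(edges[:-1], edges[1:]))

windows = monthly_windows(
    date(2010, 3, 1),
    date(2010, 6, 15),
    [date(2010, 4, 1), date(2010, 5, 1), date(2010, 6, 1)],
)
# Four windows, each ending on an update data date (or project finish).
```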
In a CPA, wide windows (greater
than a month) are undesirable. One of
the major benefits of a CPA is to track
the movement of the critical path,
which is known to be variable based
on progress and evolving means and
methods. Using wide windows opens
the possibility that the critical path will
undergo multiple shifts during the
window and will not be cataloged by
the analyst. This would allow delays to
be misallocated to specific events and
parties. A month-long window is
usually the maximum width of a
window because the pay
applications—a useful back-check
on the state of progress to date—are
generally submitted on a monthly
basis.
One of the more important
differences between the forensic
methods relates to how each treats the
project management team’s
understanding of the critical path
work, and whether the contractor and
the owner used the schedules during
the project to establish their beliefs
regarding which work was driving
project completion and then used that
knowledge to plan the upcoming
period’s work. What the project
management team knew is called its
“contemporaneous understanding of
criticality.” From the perspective of the
project management team that is
properly using their prospective
schedules for planning and executing
the next period of work, their
knowledge of what was critical to
project completion (and therefore the
explanation of their actions at the
time) is related to the status of the
critical path at the time in question.
Even in the case where future events
shift the final as-built critical path
away from an activity that was
considered critical at the time, the
understanding of the project
management team’s actions is
possible only by understanding what
they thought was critical at the time. A
major difference in the analysis
methods involves whether (and how)
they incorporate the
contemporaneous understanding of
criticality. Some methods rely heavily
on the contemporaneous view of
criticality, while others determine
criticality in a different way (such as
the determination or calculation of an
“as-built critical path” which may or
may not have a relationship to the
contemporaneous critical path). The
authors and many commentators
believe FSA methodologies that
reflect the contemporaneous
understanding of criticality are
preferred [1] [20].

Table 1—The Role of the Contemporaneous View of Criticality in FSA Methods

Table 1 provides a
general overview of the role of the
contemporaneous view of criticality in
FSA methods; however, the specifics
of that role will be discussed in more
detail later.
However, a contemporaneous
understanding of criticality can only
exist on projects that have a valid
contemporaneous schedule series.
The fact is that some projects have
schedule series that do not represent
the contemporaneous planning.
Sometimes such schedule series stem
from an adversarial relationship
between the parties that develops
during performance of the work.
These schedules are generally
unsuitable for use in a forensic
analysis [19]. Other projects do not
have schedules at all— even,
sometimes, despite the fact that the
contract mandated their use [14]. As
will be discussed, the perspective on
the contemporaneous understanding
of criticality, selected by the analyst,
will have a significant impact on the
results of the analysis.
Regarding the need to avoid
reliance on schedules developed
under an adversarial relationship,
the authors state: “In Nello L. Teer
Co., the Board
found that the usefulness of a CPM
schedule tends to become suspect
when the contractor and the owner
have developed adversarial interests.
The Board noted that there are too
many variables subject to manipulation
to permit acceptance of the
conclusions of CPM consultants in
such circumstances. The Board also
noted that this is not to say that the
CPM analyses are not to be used in
connection with contract claims. On
the contrary, they often are the most
feasible way to determine
complicated delay issues. However,
the Board must have confidence in the
credibility of the consultants and the
cogency of their presentations. In
connection with the testimony of
Nello Teer’s scheduling expert, the
Board noted that the expert
continually expressed conclusions as
to construction management that
were beyond any expertise that the
Board considered the expert to have
demonstrated.”
Whether or not a
contemporaneous understanding of
criticality was reflected in a schedule
should be a factor in determining
which method is best for analyzing a
given project’s delays. For instance, if
a project had update schedules
created by a scheduler off-site that
were never reviewed by the project
management team, it is probably not
appropriate for a forensic analyst to
choose a method like the CPA/MIP 3.3
to analyze that project. Conversely, an
analyst would likely be in error in
selecting the APAB/MIP 3.2, which
does not inherently consider the
contemporaneous understanding of
criticality, to analyze a project that had
a good series of schedules used by the
project manager and superintendent
to plan and execute the project. In
addition to the other factors to
consider in selecting an appropriate
FSA method, the analyst should also
consider how the schedules were
used and whether they influenced the
decision-making process during
execution.
The Differences in Results
A common criticism of the four
major methods is that different
methods applied to the same
set of facts yield different results.
Several practitioners have previously
examined these criticisms [24].
Although there have been varied
results from the studies, it is generally
accepted wisdom in the industry that
the four major methods return
different results when applied to the
same set of facts. This has created a
perception in some that some or all of
the methods are invalid. Further
exacerbating the problem is the fact
that professional practitioners of FSA
often seem incapable of
explaining the differences and
reconciling them, which can result in
the analysts engaging in a “battle of
the scheduling experts” that does
little to efficiently resolve disputes.
First, many of the problems with
reconciling the results of competing
analyses stem from other, non-
mathematical sources. These
problems include, but are not limited
to, the incorrect selection of a method,
the poor implementation of a MIP, or
the use of a schedule series that is
unreliable, unverifiable, or otherwise
not capable of supporting a forensic
analysis. These factors continue to
cause problems with dispute
resolution where competing delay
analyses are involved; however, the
methods proposed in this article are
not expressly designed to correct for
these factors. Instead, the authors
anticipate these methods being
chiefly used when two competently
prepared analyses are in conflict as to
the existence, quantum, and
responsibility of delays. That being the
case, we do also anticipate that
aspects of these methods could be
employed to identify a poor analysis
and to highlight its deficiencies.
An aspect of the FSA methods
that is often disregarded in this
discussion, however, is that the
methods tend to analyze the schedule
model in different ways. The APAB, for
instance, measures “what actually
happened” by using hindsight to
calculate the As-Built Critical Path
(ABCP) and measuring delays along
this path. In contrast to this, the CPA
measures what the project team
believed to be critical as of a given
schedule’s data date, and the impact
that events had on the
contemporaneous CP. The shifting
nature of the CP is well documented
and understood, and the ABCP and
the contemporaneous CP may not be
the same. The CP shifts over time—
sometimes between updates—until it
ultimately comes to rest on the final
day of the project. Therefore, an
analyst performing an APAB may
determine that, for a given window,
the project lost, for instance, 23 CD as
a result of activities on the ABCP,
whereas the opposing analyst
performing a CPA would determine
that during the same window, the
project lost 30 CD as a result of an
activity on the contemporaneous CP
that does not ultimately appear on the
ABCP. This fundamental disagreement
between methods is common, but not
insurmountable.
In order to overcome the
problems caused by the differences in
the methods, we recommend a
common communication format: the
cumulative delay graph. ”Cumulative
delay” is the number of days of delay
that have accrued through a given
point in time. In order to generate a
cumulative delay graph, one must plot
the number of days of delay that an
analysis shows the project to have
suffered as a function of each date
during the project. The source and the
frequency of the data points for the
cumulative delay graphs will vary
slightly between methods. Most
notably, the cumulative delay graph
for the APAB should be plotted as the
Daily Delay Measure (DDM) graph
[16]. For the CPA, RTIA, and CAB, the
days of predicted delay should be
plotted as of the data date of the
schedule at which the delay days are
shown to have accrued. As will be
discussed further, the resulting graph
can assist in identifying reasons for
differences in specific windows of the
project, thereby facilitating resolution.
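The cumulative delay series described above can be assembled directly from the schedule data: for each update, pair its data date with the delay accrued through that date (predicted completion minus baseline completion). A minimal sketch under that reading; the names and sample dates are illustrative assumptions, not the article's data.

```python
from datetime import date

# Assumed baseline predicted completion for this illustration.
BASELINE_COMPLETION = date(2012, 6, 7)

def cumulative_delay_points(
        updates: list[tuple[date, date]]) -> list[tuple[date, int]]:
    """For each (data_date, predicted_completion) pair, return the cumulative
    delay in calendar days accrued as of that data date."""
    return [
        (data_date, (predicted - BASELINE_COMPLETION).days)
        for data_date, predicted in sorted(updates)
    ]

points = cumulative_delay_points([
    (date(2010, 4, 1), date(2012, 6, 7)),   # on schedule
    (date(2010, 5, 1), date(2012, 6, 21)),  # 14 CD accrued
    (date(2010, 6, 1), date(2012, 7, 12)),  # 35 CD accrued
])
```

Plotting these points as delay versus data date yields the cumulative delay graph for a modeled method.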
We see the cumulative delay
graph as part of a larger reconciliation
process between methodologies. For
our comparison of the number and
timing of delay days generated for
each methodology, we have
undertaken the following seven steps:
1. The source data is validated as a
prerequisite to method selection.
2. As part of the method selection
process, [7] the project records
are examined to determine
whether the contemporaneous
view of criticality should be a
primary determining factor in
deciding which method to use. As
with all parts of the method
selection process, this decision
should be supported with
evidence.
3. The causal activity for a window
must be identified. The causal
activity should be determined on
as frequent a periodicity as the
analysis method will allow.
4. The DDM line should be plotted.
This line will serve as a baseline
for comparison of all the other
analyses. The DDM will serve as
the cumulative delay graph for
the APAB analysis.
5. Each of the analyses is then
plotted on a cumulative delay
graph. Each data point should be
the predicted completion date of
the schedule as a function of that
schedule’s data date. We overlaid
all the lines onto a single graph for
easy comparison.
6. Each window of the project
duration is reviewed, and the
causal activities identified by each
analysis and the amount of delay
determined to have accumulated
as a result of each causal activity
are noted.
causal activity and the quantum
of delay allow for agreement
between the parties and
resolution of delay related to that
specific window.
7. Differences in either causal
identification or in quantum are
identified and explained. The
differences should be able to be
explained as resulting from the
differences in the perspectives of
the analysis methods.
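Steps 6 and 7 above amount to a window-by-window comparison of causal activity and delay quantum between two analyses. A hedged sketch of that comparison, with hypothetical record shapes and a hypothetical tolerance parameter:

```python
def reconcile_window(a: dict, b: dict, quantum_tolerance_cd: int = 2) -> str:
    """Compare one window's findings from two analyses.

    Each finding is a dict like {"causal_activity": str, "delay_cd": int}.
    Returns a rough classification of the disagreement, if any."""
    if a["causal_activity"] != b["causal_activity"]:
        return "causal disagreement"
    if abs(a["delay_cd"] - b["delay_cd"]) > quantum_tolerance_cd:
        return "quantum disagreement"
    return "agreement"

# An APAB analyst sees 23 CD and a CPA analyst sees 30 CD on the same
# causal activity: a quantum disagreement to be explained by the
# differing perspectives of the two methods.
result = reconcile_window(
    {"causal_activity": "Demolish Span 1", "delay_cd": 23},
    {"causal_activity": "Demolish Span 1", "delay_cd": 30},
)
```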
The purpose of this procedure is
to first and foremost underline the
fact that there are documentable and
quantifiable reasons why two
competent analysts of the same
project could return different results.
This will not, of course, resolve
differences in opinion about the
underlying reason why a causal
activity was delayed. If both parties
identify the same activity and similar
quanta, but have different opinions
about why that specific activity was
delayed and therefore apportion
responsibility differently, this
reconciliation process will not help
resolve that issue. However, if that is
the case, then the dispute is no longer
about the schedule analyses and is
instead properly concerned with the
facts of the case.
Creation of the Test Schedule Series
The ability to reconcile the results
of different methods hinges in part on
an understanding of the normal
differences that will be exhibited by
the cumulative delay graphs of each
method. In order to establish and
analyze these differences, the authors
created a test schedule series
consisting of a baseline schedule, 37
updates, an as-built schedule, and a
collapsible as-built schedule. We did
this, rather than use an existing
schedule series from a past project, to
avoid as many of the problems
associated with poor scheduling
practices as possible. Additionally, it
allowed us to control the update
schedules and eliminate logic
revisions between the updates.
The baseline schedule was based
on a hypothetical bridge construction
project, wherein an existing bridge
with two separate spans was being
replaced, one span at a time, with
active traffic shifted to the other span.
The proposed maintenance of traffic
plan mandated that a single span be
open to two-way traffic during the
construction; therefore, the general
process for construction involved
switching all traffic to the existing
span, demolishing the abandoned
span, construction of the new span,
and switching all traffic to the new
span. The second existing span would
then be demolished and the second
new span constructed in its place. The
original baseline schedule contained
over 432 activities, had a Notice
to Proceed date of 1-Mar-2010, and a
predicted completion date of 7-Jun-
2012.
In order to create the test series
of schedules for use in this analysis,
the authors took a copy of the
baseline schedule and created new
durations which would represent the
ultimate actual durations of the
activities. These durations were
created based on a series of
theoretical productivity problems that
a bridge project encountered. The
new durations were input into the
copy of the baseline schedule, and this
schedule was recalculated as of the
original Data Date of 01-Mar-2010.
The authors then created a total of 17
activities that represented delays that
occurred during this project. Five of
these activities represented
contractor-caused delays (such as
start delays or rework issues) while
the remaining 12 activities
represented owner delays. These 17
activities were tied into the network
of this schedule, with appropriate
predecessors and successors for the
issue described by the delay activity.
The schedule was recalculated, again
as of the original data date of 1-Mar-
2010. The new predicted completion
date of the schedule was 19-Apr-2013,
or 316 calendar days (CD) after the
baseline predicted completion date.
This schedule contained no dates
assigned to the actual start or finish
columns, and as a result the network
calculations were driving all the dates
and float calculations; however, it did
represent the actual progress of the
project. This schedule therefore was
capable of serving as the “Collapsible
As-Built” schedule [9].
The Collapsible As-Built schedule
was used to calculate the As-Built
Critical Path (ABCP) of the project, and
was also used in the performance of
the Collapsed As-Built analysis. To
create the fully actualized As-Built
schedule, the authors applied
progress across the entire project,
thereby making the start and finish
dates in the Collapsible As-Built
schedule into actual start and finish
dates. This As-Built schedule had a
data date of 1-May-2013.
To create the test series of 37
update schedules necessary for
portions of this analysis, the authors
extracted the actual start and finish
dates, and the actual durations, from
the As-Built schedule, and input them
into a de-progression spreadsheet.
This spreadsheet was designed to
allow the user to estimate a remaining
duration of an activity at a given point
in time. Therefore, we were able to
enter the desired data date of the first
update schedule (in this case, 1-Apr-
2010) and the spreadsheet would
return a list of activities that would
have started and finished, as well as a
list of activities that only would have
started. For these activities, the
spreadsheet also gave a remaining
duration, based on an assumption of
straight-line progress between actual
start and actual finish. The authors
then copied the baseline schedule and
imported the “actual starts, actual
finishes,” and remaining durations for
the activities that would have seen
progress during the update window.
The schedule was then recalculated as
of the new data date, and the
predicted completion date was
recorded. This process was repeated
for each of the 37 months for which
the project was in progress.
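The de-progression logic described above — estimating an activity's remaining duration at a chosen data date, assuming straight-line progress between actual start and actual finish — can be sketched as follows. This is an illustrative reimplementation of the idea, not the authors' spreadsheet.

```python
from datetime import date

def remaining_duration(actual_start: date, actual_finish: date,
                       data_date: date) -> int:
    """Remaining duration in calendar days at data_date, assuming
    straight-line progress between actual start and actual finish."""
    if data_date <= actual_start:
        return (actual_finish - actual_start).days  # not yet started
    if data_date >= actual_finish:
        return 0                                    # already finished
    return (actual_finish - data_date).days         # in progress

# An activity that actually ran 10-Mar-2010 through 20-Apr-2010, viewed
# at the 1-Apr-2010 update: it would have started, with 19 CD remaining.
rd = remaining_duration(date(2010, 3, 10), date(2010, 4, 20), date(2010, 4, 1))
```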
The schedule series was also
created with a “weather exclusion
period” that was simply a non-work
period in the calendar assigned to
asphalt work. According to the
calendar, no asphalt work could occur
between the start of the third week in
December and the end of the second
week in March. Any asphalt activities
that were pushed into this non-work
period would immediately jump
forward three months, when the
weather would presumably be warm
enough to place asphalt.

Figure 1—Combined Cumulative Delay Graphs

This is a
common technique in construction
schedules to represent periods during
which no work can be performed on a
type of work for a specified period,
and it has a magnifying effect on
delays.
For instance, assume that in the
test schedule update for June, an
asphalt activity is shown as
completing in early December. Lack of
progress in the window (June to July)
creates a three week delay that
pushes that asphalt activity into the
weather exclusion period. Because the
calendar with the weather exclusion
period will not allow the asphalt work
to start until mid-March, the three
week delay that occurred in June has
now become a three month delay. This
is also a common source of dispute in
apportionment of delays in a forensic
analysis, since in many cases there are
multiple parties responsible for the
delays leading up to the point where
the weather exclusion period is
affecting the predicted completion
date. As that is the case, disputes
often arise over who is assigned the
magnified delay that occurs when the
schedule’s predicted completion date
jumps across the wide non-work
period.
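The magnifying effect of the weather exclusion period can be sketched with a small calendar helper. The exact exclusion dates here (15-Dec through 14-Mar) are an assumed approximation of "the start of the third week in December through the end of the second week in March" for illustration only.

```python
from datetime import date

def next_asphalt_start(d: date) -> date:
    """Shift a proposed asphalt start date out of the assumed winter
    exclusion period (15-Dec through 14-Mar)."""
    if d.month == 12 and d.day >= 15:
        return date(d.year + 1, 3, 15)
    if d.month in (1, 2) or (d.month == 3 and d.day < 15):
        return date(d.year, 3, 15)
    return d

# A three-week slip from 1-Dec to 22-Dec lands inside the exclusion
# period, so the activity jumps to 15-Mar of the following year: the
# calendar magnifies a three-week delay into roughly three months.
shifted = next_asphalt_start(date(2011, 12, 22))
```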
The 39 test series schedules that
were originally created represented
the contemporaneous updates that
the analyst would receive as the
project record schedules. These
schedules were then copied (as
necessary) and used to implement the
four analyses. Clearly, the four
methods require different schedules
for performance: the APAB requires
only the baseline schedule and the as-
built; the CAB requires the collapsible
as-built schedule; the CPA requires all
the schedules as they existed during
the project; and the RTIA requires all
the schedules, as well as the fragnets
for insertion into the schedules.
Creation of the Cumulative Delay
The combined cumulative delay
graph is shown in figure 1. The black
line represents the DDM line,
generated from the comparison of the
as-planned dates in the baseline to the
actual dates in the as-built. The
cumulative delay graph for each
method was developed by calculating
the predicted completion date for
each schedule in the analysis
method’s series of schedules, and
plotting that predicted completion
date as of the data date of the
schedule within which it was
calculated.

Table 2—Delay Totals by Method

Figure 2—Daily Delay Measure Graph
Generally, it is clear that the
cumulative delay graph for the CAB
(MIP 3.9) (in green) diverges the most
from the other three analyses. The
APAB (MIP 3.2) DDM line (in black), the
CPA (MIP 3.3) line (in blue), and the
RTIA (MIP 3.7) line (in orange) run
along a largely similar path between
March 2010 and December 2011; after
this point, the CPA/MIP 3.3 line and
the RTIA/MIP 3.7 line both drop
precipitously, whereas the APAB/MIP
3.2 DDM line continues along roughly
the same slope as before this point.
Analysts seeking to reconcile the
differences between methods must
understand the causes and
implications of these differences, and
how they relate to the specific way each
method analyzes the CPM schedule
and measures delay.
Note that the authors have
calculated the slope of the cumulative
delay lines in units of calendar days per
month (CD/Mo). Since a project
cannot experience more delay in a
month than the duration of that
month (in absence of an inserted
fragnet) the maximum natural slope of
an unedited network will not exceed
roughly 30 CD/Mo. Any time periods
with slopes greater than the maximum
natural slope result from edited
networks.
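The slope check the authors describe can be computed directly from consecutive points on a cumulative delay line; periods steeper than the maximum natural slope (roughly 30 CD/Mo, absent inserted fragnets) point to an edited network or a calendar jump. A sketch with illustrative data:

```python
from datetime import date

MAX_NATURAL_SLOPE = 30.0  # approximate ceiling, calendar days per month

def slopes_cd_per_month(points: list[tuple[date, int]]) -> list[float]:
    """Slope between consecutive (data_date, cumulative_delay_cd) points,
    expressed in CD per 30-day month."""
    out = []
    for (d0, y0), (d1, y1) in zip(points, points[1:]):
        months = (d1 - d0).days / 30.0
        out.append((y1 - y0) / months)
    return out

# 90 CD of new delay accrued in one 31-day window: slope ~87 CD/Mo,
# well above the natural ceiling, so the period deserves scrutiny.
pts = [(date(2011, 12, 1), 100), (date(2012, 1, 1), 190)]
flagged = [s > MAX_NATURAL_SLOPE for s in slopes_cd_per_month(pts)]
```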
Table 2 shows the sum of delay
days attributable to each party, by
method. Recall that in this
hypothetical, responsibility for a
particular delay has been assigned to a
party; only the timing of the delay
during the course of the project is of
concern. For example, the contractor
was assigned delay days for
“Contractor Delay” activities and for
production delays. The owner was
assigned delay days for “Owner Delay”
activities. One window within CPA/MIP
3.3 had two concurrently critical
activities, one belonging to each party.
These 6 CD were therefore designated
as concurrent delay.
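Per-party totals of the kind shown in Table 2 follow from a simple tally over the analyzed windows, with concurrently critical windows kept in their own bucket rather than apportioned. A sketch with hypothetical window records (the numbers below are invented for illustration, not the article's results):

```python
from collections import defaultdict

def delay_totals(windows: list[dict]) -> dict[str, int]:
    """Sum delay days by responsible party; a window with concurrently
    critical activities belonging to both parties is booked as 'concurrent'."""
    totals: dict[str, int] = defaultdict(int)
    for w in windows:
        parties = set(w["parties"])
        key = "concurrent" if len(parties) > 1 else parties.pop()
        totals[key] += w["delay_cd"]
    return dict(totals)

totals = delay_totals([
    {"parties": ["owner"], "delay_cd": 21},
    {"parties": ["contractor"], "delay_cd": 14},
    {"parties": ["owner", "contractor"], "delay_cd": 6},  # concurrently critical
    {"parties": ["owner"], "delay_cd": 9},
])
# totals == {"owner": 30, "contractor": 14, "concurrent": 6}
```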
One very notable difference in the
results of the four methods stems from
the weather exclusion period. Note
that in CPA/MIP 3.3 analysis, the
weather exclusion period becomes a
primary driver of the predicted
completion date in January 2012,
whereas in RTIA/MIP 3.7, the
predicted completion date is driven by
the weather exclusion period starting
in December 2011. APAB/MIP 3.2 is
not affected by the weather exclusion
period, which is due to the
observational nature of the method.
CAB/MIP 3.9 is a modeled method,
and such methods could potentially
show effects of such large non-work
periods; however, the test series as it
was organized did not ultimately allow
the CAB/MIP 3.9 analysis to do so. The
implications of this will be discussed
further below; however, for the
purposes of table 2, the delay days
attributable to the effects of the
weather exclusion period are kept in a
separate column without
apportionment to one party.
As-Planned vs. As-Built analyses
compare a baseline schedule plan,
consisting of one set of network logic,
to the as-built state of the same
network [11]. The schedules can be
compared globally, or can be broken
into smaller windows that can increase
the granularity and precision of delay
determination. Additional
mathematical analyses (such as
productivity analysis, earned value
analysis, or measured mile analysis)
help establish the as-built critical path
and apportion responsibility for
specific periods of delay to specific
Figure 3—CPA/3.3 as Compared to APAB/3.2 DDM
parties—so that the analysis does not
descend into the rightly rejected “total
time” analysis [2].
In its simplest implementation
that borders on a “total time”
methodology, the APAB/MIP 3.2 does
not consider contemporaneous
understanding of criticality; however,
more sophisticated implementations
attempt to identify the as-built critical
path through a careful examination of
the record. Identification of the as-
built critical path can take into account
a contemporaneous understanding of
criticality, although this is not essential
to the method [8]. As a result, the
DDM line on the cumulative delay
graph also does not consider the
contemporaneous understanding of
criticality. It is not a projection of how
many days ahead or behind schedule
the project management team
believed themselves to be at a given
point in time —it is a mathematical
calculation of the actual number of
days of delay at the point of
measurement.
The calculations for the DDM
values were performed on a weekly
basis for the duration of the project,
and plotted on the graph in figure 2.
The slope of the DDM line does not
exceed the maximum natural slope.
Given that the APAB does not
recognize delays until they actually
occur (it does not forward-project delay), this is
expected. As measured by the DDM,
the delay accumulated during a
window will not exceed the duration of
that window. In other words, the slope
of the DDM line will not exceed the
maximum natural slope.
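The maximum-natural-slope property described above lends itself to a simple check: the delay accrued between any two DDM measurements can never exceed the number of calendar days between them. The sketch below is illustrative only; the function name and the weekly sample values are hypothetical.

```python
from datetime import date

def check_ddm_slope(measurements):
    """Verify the 'maximum natural slope' property of a DDM series.

    measurements: list of (measurement_date, cumulative_delay_cd)
    tuples, sorted by date. Returns any windows in which the
    recorded delay exceeds the window's duration.
    """
    violations = []
    for (d0, cd0), (d1, cd1) in zip(measurements, measurements[1:]):
        window_days = (d1 - d0).days
        accrued = cd1 - cd0
        # An observational method cannot record more delay in a
        # window than there are calendar days in that window.
        if accrued > window_days:
            violations.append((d0, d1, accrued, window_days))
    return violations

# Hypothetical weekly DDM values (CD = calendar days of delay).
weekly = [
    (date(2010, 8, 1), 0),
    (date(2010, 8, 8), 5),
    (date(2010, 8, 15), 9),
    (date(2010, 8, 22), 14),
]
print(check_ddm_slope(weekly))  # → [] : no window exceeds its duration
```

Any non-empty result would indicate that the series is forward-projecting delay rather than observing it, which is characteristic of the modeled methods, not the APAB.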
The DDM line in figure 3 serves as
the basis of comparison for the
cumulative delay lines of the other
methods. Since it measures the actual
delay as it occurred, it provides a
useful reference point from an
observational perspective against
which analysts can compare the
modeled methods.
MIP 3.3: Contemporaneous Period
Analysis
The CPA uses the update
schedules created during construction
to reconstruct the events of the
project, thereby demonstrating
the changing nature of the critical path
through each of the successive
updates [8, 18]. As project events, such
as progress and unforeseen conditions,
unfold and are reflected in the
contemporaneous schedules, the
effects of progress and subsequent
network revisions (hopefully linked to
the contractor’s revisions to intended
means and methods) will cause gains
and losses to each schedule’s
predicted completion date.
Additionally, subsequent schedules in
the contemporaneous series will show
when the critical path of the project
shifts from one area to another. The
size of the window to be analyzed is
variable: month-to-month is common,
but it is possible to make the windows
narrower (such as week-to-week)
or to define windows by alleged delay
events.
The CPA and its more complex
implementation, the Bifurcated CPA,
rely heavily upon the
contemporaneous understanding of
criticality because they are using the
existing schedule series to determine
what the project team thought was
critical at the time [12, 13, 18]. Note
that there is a third type of
Contemporaneous Period Analysis—
the Recreated Contemporaneous
Period Analysis—which uses schedules
recreated by the analyst, presumably
because adequate schedules were not
created contemporaneously [10].
Since the schedules used in this
Figure 4—RTIA/MIP 3.7 as Compared to APAB/MIP 3.2 DDM
analysis did not exist on the project,
they could not have influenced
execution. The Recreated
Contemporaneous Period Analysis
does not, therefore, use a
contemporaneous understanding of
criticality. MIP 3.5 allows for a wide
range of after-the-fact reconstruction.
If only minor adjustments to the
contemporaneous schedule updates
are made, they may actually reflect the
contemporaneous understanding of
criticality.
The update schedules created for
the test series were used to create the
cumulative delay graph. When owner
delay activities started during the update
period, they were shown with their
actual start date and a remaining
duration proportional to the original
duration, assuming straight-line
progress across the activity. They were
not used to forward-project the
entirety of the delay, as is the case with
RTIA/MIP 3.7. Figure 3 shows this.
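The straight-line remaining-duration convention described above can be sketched as follows; the helper name and the sample dates are hypothetical.

```python
from datetime import date

def remaining_duration(actual_start, data_date, original_duration_cd):
    """Remaining duration at the data date, assuming straight-line
    progress across the activity: one day of duration is consumed
    per calendar day elapsed since the actual start.
    """
    elapsed = (data_date - actual_start).days
    # Never report a negative remaining duration for an activity
    # that has run past its original duration.
    return max(original_duration_cd - elapsed, 0)

# A hypothetical owner delay activity: 30 CD original duration,
# actually started 12 days before the update's data date.
print(remaining_duration(date(2010, 8, 20), date(2010, 9, 1), 30))  # → 18
```

Under this convention only the unconsumed portion of the delay activity pushes the predicted completion date, in contrast to the RTIA's forward projection of the full fragnet duration.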
The most notable feature of the
CPA/MIP 3.3 line is that between NTP
and 1-Jan-2012, it generally follows a
similar path to the APAB/MIP 3.2 DDM
line; however, it is also clear that the
CPA/MIP 3.3 line tends to lead the
DDM line by some amount.
Specifically, the CPA/MIP 3.3 line
accrues delay between 2 and 30 CD
faster than the APAB/MIP 3.2 DDM
line. On average, the CPA/MIP 3.3 line
leads the DDM by approximately 5 CD.
This is consistent with
expectations: CPA/MIP 3.3 predicts the
upcoming window’s delay as of that
schedule update’s data date, whereas
the APAB/MIP 3.2 DDM line tracks
actual delay as it occurs. For instance,
in figure 4, on 1-Sep-2010 the
cumulative delay values for the
APAB/MIP 3.2 DDM show that the
project had accumulated 51 CD of
delay. On the other hand, CPA/MIP 3.3
determines that the project had
accumulated 82 CD of delay as of the
same date. This difference of 31 CD of
delay is superficially a significant
difference in the results of the two
analyses; however, as of 1-Oct-2010,
the APAB/MIP 3.2 DDM line shows
roughly 77 CD of delay, and the
CPA/MIP 3.3 CPA line continues to
show 82 CD of delay. The CPA simply
looked forward and predicted that in
the upcoming window, there would be
82 CD of delay. The DDM does not look
forward; its delay therefore
accrued over the course of the window
until the two analyses were
largely in agreement.
In this test series, the two analyses
identify the same activities as the
cause of delay during this window. The
cumulative delay graph therefore
primarily assists, in this window, with
quantifying the delay associated with
the specific events. However, in the
event that the APAB/MIP 3.2 DDM and
the CPA/MIP 3.3 determined that the
ABCP was different from the
contemporaneous critical path during
this window, the discussion between
the parties should shift to whether the
CPA/MIP 3.3 is an appropriate method
for the analysis.
Assume that a CPA/MIP 3.3 shows
that a given activity was, as of the data
date of a particular update, predicted
to cause 15 CD of delay, and that the
analyst performing the CPA/MIP 3.3
asserts that the predicted delay is
proof of entitlement to an excusable
and compensable time extension.
Meanwhile, the APAB/MIP 3.2 DDM
for that same window shows that a
different activity drives the ABCP for
that same window and caused 17 CD
of delay. The quanta are roughly in
line; however, the cause of delay is in
dispute. The issue again becomes
Figure 5—RTIA/MIP 3.7 as Compared to CPA/MIP 3.3
whether the schedule series affected
the contemporaneous understanding
of criticality. The analyst performing
the APAB has the benefit of
demonstrating what actually delayed
the project; however, as previously
discussed, the contemporaneous view
of criticality is preferred, if it can be
proven. The analyst performing the
CPA/MIP 3.3 cannot simply state that
the prediction showed a delay would
occur; he or she should also show that
the prediction affected the project
management team’s actions in some
way (such as shifting resources to the
activity perceived to be critical, or
planning for accelerated work in the
future). If the predicted delay existed
only on the scheduler’s software and
never influenced the project
management team’s actions, then the
contemporaneous understanding of
criticality was not affected and the
predicted delay is meaningless. In this
case, the authors believe that the
APAB/MIP 3.2 DDM line and its
associated as-built critical path causal
activity are more appropriate to
determine the delay for the window.
In January 2012 the CPA/MIP 3.3
line drops from a predicted delay of
185 CD to 300 CD of delay. This
sudden drop of 115 CD is a
predicted delay resulting from the
effects of the previously discussed
weather exclusion period. Note,
however, that the APAB/MIP3.2 DDM
continues to trend steadily downward
at an average slope of approximately 8
CD per month. In practical terms, the analyst
performing the APAB/MIP 3.2 and
relying on the DDM would say that the
effects of the weather exclusion
period were irrelevant: the contractor
had been incapable of maintaining
schedule prior to 1-Jan-2012, and the
work excluded by the non-work
period would not have been available
for execution any earlier. In the
APAB/MIP 3.2 analysis, then, the 115
CD were a result of the contractor’s
poor progress. In contrast, the analyst
performing the CPA/MIP 3.3 would
argue that owner delay activities
(including differing site conditions,
design changes, etc.) pushed the work
into the weather exclusion period and
that therefore the 115 CD were the
responsibility of the owner.
In answer to the other party’s
charges that poor progress was the
cause, the contractor could mount a
defense of “pacing” of work. In other
words, the contractor would allege
that given his or her knowledge of the
future delay brought about by the
weather exclusion (linked with his or her
contemporaneous analysis that
attributed this delay to the owner),
the contractor deliberately slowed
production on available work so that it
would be complete only just in time
for the early start of the weather-
affected work. Once again, however,
this is an argument that rests heavily
with the contractor’s
contemporaneous understanding of
criticality. In order for this pacing
argument to be legitimate, the
contractor would need to show that
he or she had this understanding of the
weather delay as of 1-Jan-2012, and
that he or she took actions to slow the
production. Without this
demonstration, it will be difficult for
the owner to accept that the
production delays before 1-Jan-2012
were not the result of the contractor's
poor productivity, and that the
production delays after 1-Jan-2012
were the result of deliberate pacing.
The cumulative delay graph highlights
the need for proof of this
contemporaneous understanding of
criticality.
Therefore, for the purposes of
establishing that the CPA/MIP 3.3
graph is the appropriate
measurement tool and that it should
supersede the other method’s graph
for a given period, the analyst
performing the CPA/MIP 3.3 should
establish the following:
• The analyst must confirm that the
means and methods were
accurately represented in the
contemporaneous update.
• The analyst must confirm that the
schedule was used to plan and
execute the project, and that the
results of the CPM calculation
influenced the contemporaneous
understanding of criticality.
Figure 6—RTIA/MIP 3.7 as Compared to CPA/MIP 3.3 for November 2010 to April 2011
The analyst would conceivably
accomplish this through review of
project documentation such as
meeting minutes, daily reports, and
correspondence. This backup
information would be essential,
however, to justifying the use of a
specific method’s cumulative delay
graph and associated causal activities.
These recommendations should be
performed in concert with the
recommendations of Section 2 of AACE
International Recommended Practice
29R-03 on source validation.
MIP 3.7: Retrospective Time Impact
Analysis
The TIA is one of the most
common and widely accepted
methods to analyze project delays. A
TIA compares two schedules with the
same data date—one schedule (the
“unimpacted schedule”) that
represents the status of construction
and the critical path just before the
discovery of an event, and a second
schedule (the “impacted schedule”)
that represents what happens to the
critical path and the predicted
completion date once the delay event
occurs. The event, administrative
resolution time, and added work
necessary to return to original
contract work are represented in the
impacted schedule through the
addition of a fragnet consisting of
representative activities and logic. The
comparison of the predicted
completion dates of these two
schedules (before and after the
fragnet insertion) determines
whether there is entitlement to a time
extension.
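The before-and-after comparison at the heart of a TIA can be illustrated with a toy CPM forward pass. All activity names, durations, and the fragnet below are hypothetical; the entitlement quantum is simply the shift in the predicted completion date after the fragnet is inserted.

```python
def forward_pass(activities):
    """activities: {name: (duration, [predecessors])}, finish-to-start
    logic only. Returns the project's predicted completion (work days)."""
    early_finish = {}
    def ef(name):
        if name not in early_finish:
            dur, preds = activities[name]
            # Earliest start is the latest finish of all predecessors.
            early_finish[name] = max((ef(p) for p in preds), default=0) + dur
        return early_finish[name]
    return max(ef(a) for a in activities)

# Hypothetical unimpacted schedule as of the data date.
unimpacted = {
    "excavate": (10, []),
    "foundations": (15, ["excavate"]),
    "structure": (20, ["foundations"]),
}

# Impacted schedule: a fragnet for a differing site condition,
# including the event, administrative resolution time, and a logic
# tie returning to the original contract work.
impacted = dict(unimpacted)
impacted["dsc_event"] = (5, ["excavate"])
impacted["dsc_resolution"] = (7, ["dsc_event"])
impacted["foundations"] = (15, ["excavate", "dsc_resolution"])

delta = forward_pass(impacted) - forward_pass(unimpacted)
print(delta)  # → 12 : the predicted completion slips by 12 days
```

In this sketch the fragnet delays the start of "foundations" by 12 days, so the comparison of predicted completion dates yields a 12-day entitlement.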
Though the TIA is widely popular and
commonly used, one important
aspect of it is also widely
overlooked: the timing of the analysis.
If a TIA is conducted before the added
work is performed, it is a Prospective
TIA [3]. A Prospective TIA is an
essential tool for the project
scheduler to determine the likely
impacts of changed conditions on a
project and is often included as a
requirement in the contract as a
prerequisite for granting a time
extension. When the change
management plan on a project is
working properly, a Prospective TIA is
associated with a bilateral
modification that adds the time (and
money) to the contract necessary to
compensate the contractor for the
change [15].
However, as discussed, the
forensic analyst is constrained by the
fact that he or she joins the project
after project completion. Therefore,
any TIA that is performed is done after
the added work has been completed,
and is therefore a Retrospective TIA.
There is some controversy about the
use of Retrospective TIAs because of
the potential for manipulation, and
the fact that modeling events
retrospectively allows selective
modeling of only one party’s alleged
delays while excluding others [17]. A
Retrospective TIA that only models
owner delays will tend to conclude
that only the owner was responsible
for the delays, whereas one that
only models contractor delays will
show the opposite. This can lead to an
imbalanced view of responsibility for
delays. Despite this, it is conceivably
possible to perform an effective
Retrospective TIA. In selecting this
method, however, the analyst is
abandoning the contemporaneous
understanding of criticality, because this
technique is creating new schedules,
not used on the project, while
modeling actual events
retrospectively.
Figure 7—CAB/MIP 3.9 as Compared to APAB/MIP 3.2 (DDM)
The RTIA/MIP 3.7 line tends to
lead the APAB/MIP 3.2 DDM line in a
manner similar to, but more pronounced
than, the CPA/MIP 3.3 line. The
RTIA/MIP 3.7 line leads
by an average of approximately 13 CD.
Again, this lead is related to the fact
that the RTIA/MIP 3.7 is predicting
delay rather than measuring actual
delay; however, in contrast to
CPA/MIP 3.3, the RTIA/MIP 3.7 is
predicting delay in inserted fragnets,
as well as in the original CPM network.
To better understand the differences
between the two, refer to figure 5.
The RTIA leads the CPA/MIP 3.3
line by an average of roughly 7 CD. In
addition, note that in figure 6, the
number of days assigned to the
contractor (40 CD of delay) is much
lower than in the other methods. In
the CPA/MIP 3.3 analysis, the
apportioned delay between owner
and contractor was 51% to 49%; in
RTIA/MIP 3.7, the
apportioned delay split was 71% to
17%. The contractor tends to receive a
lower apportionment of delay days in
methods that forward-project delays
associated only with the owner. In
other words, if the fragnets inserted
into an RTIA are always representative
of the other party’s alleged delays,
then the analysis will tend to show
that the other party is responsible for
most of the delays. For this reason, it
is not good practice to only model one
party’s delays. However, there are
conceivably occasions when such an
analysis could be appropriate, and
those would be times when the
inserted fragnets were representative
of the contemporaneous
understanding of criticality.
CPA/MIP 3.3 (including the
related Bifurcated CPA/MIP 3.4) and
RTIA/MIP 3.7 each propose a
forward-looking modeled analysis
wherein the contemporaneous
understanding of criticality is
assumed, but must be proven.
However, CPA/MIP 3.3 only assumes
that the unimpacted CPM network
influenced this understanding, while
RTIA/MIP 3.7 assumes that both the
fragnet and the CPM network were
influential. Of course, this is not
always accurate. If the fragnet was
contemporaneously proposed and
established, it is likely that the use of
this fragnet in a RTIA/MIP 3.7 is
correct in its assumption that the
party inserting the fragnet had a
contemporaneous understanding of
criticality as projected by the fragnet
and schedule recalculation. This
assertion could of course be disputed
or refuted by the other party.
However, if the fragnets are created
after the fact and were never
considered by the project
management team during project
execution, then it is unlikely that the
RTIA/MIP 3.7 in this case is
representing any contemporaneous
understanding of criticality. In other
words, it pretends that the on-site
management would have seen future events
as the recalculated after-the-fact
schedule depicts them.
The impact is seen in the
cumulative delay graphs in the way
that more delay accrues earlier in the
RTIA/MIP 3.7 graph. Figure 6 shows
the MIP 3.3 and the MIP 3.7 graph for
the period from November 2010
to April 2011.
Both cumulative graphs begin at
the same point of delay, each
calculating that the project was 76 CD
behind schedule as of 1-Nov-2010.
However, at the start of December
2010, RTIA/MIP 3.7 calculates that the
project is 100 CD behind schedule,
compared to only 84 CD for CPA/MIP
3.3. The RTIA/MIP 3.7 cumulative
delay graph stays flat from 1-Dec-2010
to 1-Feb-2011, at which point it begins
to accumulate delay again. The
authors reviewed the test schedules
to determine what the driving
activities were during this window,
and determined that in the 1-Nov-
2010 update schedule, the RTIA/MIP
3.7 included a fragnet representing a
differing site condition. The insertion
of the fragnet caused the sudden loss
of 24 CD during the month of
November. In comparison, the
CPA/MIP 3.3 line identifies only an 8
CD delay during the same month,
related to poor contractor production.
This dichotomy reveals the heart
of many disputes. One party uses a
modeled technique that “proves” that
the critical path ran through an owner-
caused differing site condition, while
the other party’s modeled technique
“proves” that the problem was
actually sustained poor production.
Particularly if the contractor is using
the RTIA/MIP 3.7 and the owner is
using the CPA/MIP 3.3, this argument
can go on without resolution.
However, the cumulative delay graph
highlights the timing of the delay
accrual, which relates directly to the
contemporaneous understanding of
criticality. The RTIA/MIP 3.7 effectively
alleges that, as of 1-Nov-2010 (or
reasonably close to that date) the
contractor had identified the differing
site condition, had estimated the
duration of time necessary to
overcome the change in order to
return to contract work, and had
perceived that the predicted
completion date was delayed by 24 CD
as a result. These are the facts that
must be proven to establish the
propriety of the RTIA/MIP 3.7’s
conclusions; without this, it is very
easy to foresee scenarios when one
party’s analyst simply forward-impacts
a CPM model with fragnets of the
other party’s delays until the analyst’s
client apparently bears no
responsibility for any delay. The
RTIA/MIP 3.7 line will simply stair-step
down through the project duration,
claiming that delay accrued earlier
than it actually did and was always the
responsibility of the other party.
Therefore, for the purposes of
establishing that the RTIA/MIP 3.7
graph is the appropriate measurement
tool and that it should supersede the
other method’s graph for a given
period, the analyst performing the
RTIA/MIP 3.7 should establish the
following: [25]
• The analyst must confirm that the
means and methods were
accurately represented in the
contemporaneous update.
• The analyst must confirm that the
schedule was used to plan and
execute the project, and that the
results of the CPM calculation
influenced the contemporaneous
understanding of criticality.
• The analyst must also confirm
that as of the data date of the
schedule (or reasonably soon
thereafter) the project
management team became
aware of the issue modeled in the
fragnet, that they impacted the
schedule with the fragnet, and
that the resulting shift in the CP
and later predicted completion
date influenced the project
management team’s
contemporaneous understanding
of criticality.
• The analyst should also be
prepared to discuss whether
there was contemporaneous
pacing.
MIP 3.9: Collapsed As-Built
The Collapsed As-Built method
recreates a CPM model of the as-built
schedule by creating logic and
durations that reflect the apparent
logic that drove the work and the
actual dates on which the work was
performed. The analyst then dissolves
selected delay activities and recalculates
the schedule in order to show what
would have happened had a certain
event not taken place. The Collapsed
As-Built method can either be
performed in a single step (deleting all
alleged delay activities at once) or in
multiple steps (removing one activity
at a time and recalculating after each
deletion). A conceptual advantage to
the Collapsed As-Built method is that
the as-built schedule contains both
parties’ delays, so if the analyst
removes only one party’s delays from
the schedule, the other party’s delays
are still present. In other words, the
Collapsed As-Built naturally considers
both parties’ delays. Because this
technique involves creating a series of
CPM schedules which were not used
on the project, it does not rely upon
the contemporaneous understanding
of criticality.
For this analysis, the authors
started with the test series’
Collapsible As-Built Schedule, and
dissolved each owner delay activity in
turn, beginning with the activity with
the latest finish date and moving
backwards. After each dissolution, the
schedule was recalculated and the
change in the predicted completion
date was recorded. As shown in figure
7, after all the owner delay activities
were dissolved, the predicted
completion date had shifted 31 CD
earlier than the actual finish. As such,
these 31 CD were assigned to the
owner, while the remaining 285 CD
were assigned to the contractor.
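The stepwise dissolution procedure can be sketched with a toy network. Everything here is hypothetical (activity names, durations, logic), and "dissolving" is simplified to zeroing the delay activity's duration rather than removing it and re-tying its logic.

```python
def forward_pass(activities):
    """activities: {name: (duration, [predecessors])}, finish-to-start
    logic. Returns the predicted completion date (in days from NTP)."""
    early_finish = {}
    def ef(name):
        if name not in early_finish:
            dur, preds = activities[name]
            early_finish[name] = max((ef(p) for p in preds), default=0) + dur
        return early_finish[name]
    return max(ef(a) for a in activities)

# Hypothetical collapsible as-built network with two owner delays.
as_built = {
    "mobilize": (5, []),
    "owner_delay_1": (10, ["mobilize"]),
    "earthwork": (20, ["owner_delay_1"]),
    "owner_delay_2": (8, ["earthwork"]),
    "finishes": (15, ["owner_delay_2"]),
}
owner_delays = ["owner_delay_2", "owner_delay_1"]  # latest finish first

collapsed = dict(as_built)
steps = [forward_pass(collapsed)]
for act in owner_delays:
    dur, preds = collapsed[act]
    collapsed[act] = (0, preds)       # dissolve this delay activity
    steps.append(forward_pass(collapsed))

print(steps)  # → [58, 50, 40] : 18 CD of the total assignable to the owner
```

Note that, as in the article's analysis, the contractor's delays remain in the network after the collapse: the gap between the original completion and the fully collapsed completion is the owner's share, and everything else stays with the contractor.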
The cumulative delay graph for
CAB/MIP 3.9’s analysis is clearly the
most divergent from the APAB/MIP
3.2 DDM line. This is understandable
given that the method attempts
to account for both
parties' delays by deleting only one
party's and leaving the other's in the
schedule. This runs counter to
RTIA/MIP 3.7, in that with the RTIA
the additive modeling of, for instance,
the owner’s delays has a tendency to
mask the contractor’s. In the CAB, the
contractor’s delays remain after the
stepped deletion of the owner’s delay
activities. The cumulative delay graph
for RTIA/MIP 3.7 therefore includes
delays by both parties, while CAB/MIP
3.9 depicts only one party’s delays.
The conclusion that could be
drawn from the CAB analysis is that,
but for the delays of the owner, the
contractor would have finished only
31 CD earlier. Note that the weather
exclusion period was not regained
during the dissolution of the owner
delay activities; therefore, it is
possible to conclude that regardless of
the owner’s delays, the contractor
would have encountered the weather
exclusion period’s jump in predicted
completion date on its own. This
explains why the weather exclusion
period days are assigned to the
contractor in table 2.
The CAB measures delay in a
significantly different manner than
the other three methods. First, it does
not attempt to start at the NTP date,
where there were zero days of delay
accrued, and work forward through
each window. Instead, it analyzes the
project in reverse, starting with the
actual number of delay days accrued.
Second, the method is designed to
specifically leave behind one party’s
delays. As a result, the MIP 3.9 CAB
line will never return to zero. The
authors have concluded that at this
point the only technique for
reconciling the CAB with the other
methods would be to perform the
CAB/MIP 3.9 twice, once excluding
the owner’s delays and once excluding
the contractor’s delays.
Conclusions
The cumulative delay
graph can be a useful tool in
reconciling the apparently different
results of the four methods. It is particularly
useful when used as part of a larger
process of putting the results of the
methods into a common format and a
collaborative effort between the
parties to establish periods of
similarity and differences. The
cumulative delay graph will aid in
establishing when delays accrued; it
will not, however, resolve disputes
where the causal activity is agreed
upon but the underlying reason for
delay is at issue.
Generally speaking, the
APAB/MIP 3.2 DDM line establishes
when the delay actually occurred. The
CPA/MIP 3.3 line tends to show that
delay accrues slightly earlier than the
APAB/MIP 3.2 DDM line, because the
CPA/MIP 3.3 is calculating the delay to
the predicted completion date based
on the unedited CPM network alone.
The RTIA/MIP 3.7 tends to show that
delay accrues earlier than the
CPA/MIP 3.3, because the RTIA/MIP
3.7 is calculating delay to the
predicted completion date based on
the CPM network as impacted by
fragnets. A longer fragnet will tend to
claim more delay earlier.
The cumulative delay graph
highlights when delay either actually
occurred, as in the APAB/MIP 3.2 DDM
line, or when it was perceived by the
parties to have occurred, as with the
CPA/MIP 3.3 and the RTIA/MIP 3.7. In
order to prove that this perception