Winter 2018 | Volume 7
ARTICLES
Proactively Responding to
Government Investigations Using
Data Analytics: An Examination of
Data Considerations in the Post-
Acute Context
Katie Pawlitz, Esq. and Greg Russo
Antitrust Market Definition—A
Review of Five Influential Papers
Audrey Boles, Sam Brott, and Michele Martin
Utilizing a Self-Financing Strategy
for Projects
Tito Cardoso
Editor
Cleve B. Tyler, PhD
Publication Coordinator
Matthew Caselli
Associate Editors, Volume 7
Kevin Christensen, PhD
Kelly Nordby, PhD
Jennifer Hull
Brad Noffsker
Brian Alg
Jay Seetharaman
Riley O’Connell
Subscription. For subscription questions, problems, or address changes:
The BRG Review
Berkeley Research Group, LLC
1800 M Street NW
Second floor
Washington, DC 20036
202.480.2700
info@thinkbrg.com
Copyright: Copyright ©2018 by Berkeley Research Group, LLC. Except as may be expressly
provided elsewhere in this publication, permission is hereby granted to produce and distribute
copies of individual works from this publication for non-profit educational purposes, provided
that the author, source, and copyright notice are included on each copy. This permission is in
addition to rights of reproduction granted under Sections 107, 108, and other provisions of the
U.S. Copyright Act and its amendments.
Disclaimer: The opinions expressed in the BRG Review are those of the individual authors and
do not represent the opinions of BRG or its other employees and affiliates. The information
provided in the BRG Review is not intended to and does not render legal, accounting, tax, or other
professional advice or services, and no client relationship is established with BRG by making any
information available in this publication, or from you transmitting an email or other message
to us. None of the information contained herein should be used as a substitute for consultation
with competent advisors.
Table of Contents
1.	 Letter from the Editor...........................................................................................................v
	 Cleve B. Tyler, PhD
2.	 Proactively Responding to Government Investigations Using Data Analytics:
	 An Examination of Data Considerations in the Post-Acute Context.............................. 1
	 Katie Pawlitz, Esq. and Greg Russo
3.	 Antitrust Market Definition—A Review of Five Influential Papers ..............................14
	 Audrey Boles, Sam Brott, and Michele Martin
4.	 Utilizing a Self-Financing Strategy for Projects ............................................................ 38
	 Tito Cardoso
Letter from the Editor
Welcome to the seventh volume of the BRG Review, an official publication of Berkeley Research
Group, LLC. This publication reviews several topics based on independent analysis by our authors.
The breadth of material covered provides insight into some of the varied and interesting ongoing
research performed around the world by experts and staff throughout BRG. Our experts comprise
academics and private-sector professionals in fields including economics, finance, healthcare,
and data analytics. BRG has over 1,100 professionals in more than 40 offices worldwide who apply
innovative methodologies and analyses to complex problems in the business and legal arenas.
In our first paper, Greg Russo and attorney Katie Pawlitz address the role of data analytics in fraud
investigations in the healthcare industry, both proactively by providers and during the course of a
government investigation. The role of sampling in False Claims Act investigations and litigation is
also explored in-depth, including a review of case law regarding the legality of sampling for use in
addressing liability and damages.
In our second paper, Samuel Brott, Michele Martin, and Audrey Boles provide detailed reviews of
five influential papers regarding market definition for use in antitrust investigations and litigation.
They review key contributions of each paper and some of the critiques that the core idea in each
paper has faced. The authors are current and former staff members at BRG and highlight the depth
of talent that exists within BRG.
In our last paper, Tito Cardoso provides a strategy for investment that firms may employ when facing
restricted access to capital. This self-financing strategy envisions splitting a project into stages such
that the investment required at any time to advance the project is smaller.
Finally, a special thank you to the associate editors who work hard to ensure that the papers published
within the BRG Review reflect nothing short of excellence. To our readers, we hope these papers
stimulate discussion and discourse and deepen our relationships with fellow professionals, academics,
clients, government representatives, attorneys, and other interested individuals across the world.
Regards,
Cleve B. Tyler, PhD
Editor
Proactively Responding to Government
Investigations Using Data Analytics:
An Examination of Data Considerations in the
Post-Acute Context1
Katie Pawlitz, Esq., and Greg Russo*
Katie Pawlitz is a Partner in the Washington, DC, office of Reed Smith LLP. She represents a variety of healthcare
providers, suppliers, manufacturers, and associations regarding regulatory issues arising under the Medicare and
Medicaid programs and under the healthcare fraud and abuse laws. She also assists clients involved in anti-kickback,
Stark Law, and False Claims Act investigations and litigation matters. She can be reached at kpawlitz@reedsmith.com.
Greg Russo is a managing director in the Washington, DC, office of BRG who specializes in providing strategic advice
to healthcare organizations through his use of complex data analyses and financial modeling. He can be reached at
GRusso@thinkbrg.com.
Picture this:
It is 10 a.m. on a Friday. Your day as in-house counsel for a nursing home
chain is moving along normally. You finished your morning staff meeting and are
prepping for a meeting with the Chief Financial Officer (“CFO”). Your administrative
assistant knocks on your door to say that a government subpoena directed to your
attention has just arrived. Your stomach sinks – not because you are aware of fraud but
because you are aware of the headaches that come with responding to a government
subpoena. You conduct an initial review of the subpoena and forward a copy for review
to another member of the company’s legal team. Your colleague reports to you that
he has heard rumors that the Department of Justice (“DOJ”) has been speaking with
a former employee who previously worked in your finance department. Rumor has it
that the former employee discussed being a party to several conversations in which
both the Chief Executive Officer (“CEO”) and CFO pressured individuals running your
different facilities to increase profits by admitting patients even if the circumstances
were not warranted. This former employee also indicated that the CEO and CFO
exerted pressure for lengths of stay to increase.
1	 Originally published in ABA The Health Lawyer 29(5) (June 2017): 23–30.
* 	 Special thanks to Vicki Morris, Brady Fowler, and Elena Kuenzel for their assistance in researching and drafting this article.
You have worked in post-acute care long enough to understand a few things. First,
these allegations are serious. Second, these allegations, if true, would increase profits.
Third, this is going to be a long investigation. For these reasons, it is not surprising
that the DOJ’s interest is piqued, as is yours.
You immediately sketch your next steps. Hire external counsel. Ensure a thorough
and expeditious investigation. Determine if the allegations are true. If not true, then
provide ample evidence to disprove the allegations. If true, then proactively calculate
damages and negotiate a settlement with the DOJ.
Any response to an investigation of this sort should involve the use of data analytics. The
government and its contractors are becoming increasingly sophisticated in using data
to develop theories of wrongdoing and to identify suspected fraudulent behavior. As a result,
providers must be aware of their own data and the optics of that data. Providers should seek to
use data, and analyses related to the same, to proactively monitor risk; to respond to government
investigations; to dissuade the government from intervening in a False Claims Act (“FCA”) case;
as a point of consideration in settlement discussions; and, if necessary, as a defense tool in
FCA litigation.
While growth in post-acute spending has recently slowed, the Medicare program approximately
doubled its post-acute spending between 2000 and 2015. As a result, the government seeks to
ensure that the most appropriate (and cost-efficient) care is being provided and has relied on
standard data analytics to identify anomalies in care patterns. This article focuses on post-acute
providers and data analytics pertaining to these providers.
In making the case for the use of data in proactively responding to government investigations, this
article examines data considerations in the context of post-acute providers, although the same
concepts apply to all types of providers. This article also describes data monitoring activities
undertaken by the government and how similar monitoring can and should be proactively
implemented by providers. Finally, this article discusses the use of sampling in FCA investigations
and litigation, a common approach for which data can play a significant role.
Reimbursement Overview
In order to appreciate potential risks and allegations in the context of government fraud investigations,
and the use of data to respond to the same, one must consider the reimbursement methodology at issue.
The following sub-sections provide an overview of Medicare’s reimbursement methodologies for
various post-acute provider types and the key data elements for each. These key data elements
are items that may be indicative of a provider manipulating the reimbursement system to garner
more revenues/profits. As such, government prosecutors are increasingly relying on these key data
elements to support theories of wrongdoing. Recognizing this, providers should be proactively
monitoring their own metrics as they relate to the relevant data elements discussed below, and
in the face of a government investigation, developing a defense strategy that accounts for, or
puts in context, these data elements.
Skilled Nursing Facilities (“SNFs”)
Reimbursement Overview: SNFs are paid a per diem payment for the provision of
services to Medicare beneficiaries based on a prospective payment system (“PPS”),
which means Medicare pays for services based on a predetermined, fixed amount.
The SNF PPS payment covers all costs of furnishing covered Medicare Part A
SNF services (routine, ancillary, and capital-related costs), with limited exception.
The PPS payment for each resident is adjusted for case mix and geographic variation
in wages. Case-mix adjustments are based on residents’ assessments, which classify
residents into resource utilization groups (“RUGs”) based on the severity of residents’
medical conditions and skilled care needs. The determination of resource needs, or
RUG category, is established using the Minimum Data Set (“MDS”), a standardized
tool that assesses the resident’s clinical condition, functional status, and expected
use of services.
Key Data Elements: There are several data elements to analyze when responding to
DOJ investigations regarding whether or not a SNF has exploited incentives. These
data elements provide a retrospective view of the facility’s operations. Primary among
these data elements is the distribution of the number of days that a facility provides
SNF services at each RUG level and the number of minutes of therapy being provided.
It is helpful to understand how the distribution changes over time and whether the
number of minutes of therapy has materially changed. Other patterns that can be
assessed for abnormalities include the manner in which change-of-therapy assessments
occur and the distribution between group/concurrent therapy. Additional patterns
could include the percentage of patients being readmitted to a short-term acute care
hospital as well as the overall length of stay (“LOS”), especially for patients staying
over 90 days. Another measure that will likely be considered when the DOJ investigates
is the activities of daily living recorded at each patient’s assessment. The activities of
daily living contain measures of how a patient performs daily living tasks (e.g., walking,
eating, dressing). These daily living measures are not intended to measure performance
or quality of care, and a facility should be cautious either when proactively using them
or when responding to a DOJ inquiry.
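To make this kind of proactive monitoring concrete, consider the following sketch. The data layout, field names, and figures are hypothetical, not drawn from any actual claims system; it simply tabulates the share of billed SNF days at each RUG level by year, so a shift toward higher-paying groups becomes visible:

```python
from collections import defaultdict

def rug_day_shares(claims):
    """Compute the share of billed SNF days at each RUG level, by year.

    `claims` is an iterable of (year, rug_level, days) tuples -- a
    hypothetical flattened extract of per-stay billing data.
    Returns {year: {rug_level: share_of_days}}.
    """
    days_by_year_rug = defaultdict(lambda: defaultdict(int))
    for year, rug, days in claims:
        days_by_year_rug[year][rug] += days

    shares = {}
    for year, by_rug in days_by_year_rug.items():
        total = sum(by_rug.values())
        shares[year] = {rug: d / total for rug, d in by_rug.items()}
    return shares

# Illustrative data only: the Ultra High ("RU") share rising year over year.
sample = [
    (2014, "RU", 300), (2014, "RV", 700),
    (2015, "RU", 550), (2015, "RV", 450),
]
shares = rug_day_shares(sample)
```

The same tabulation applies to therapy minutes or length-of-stay buckets by swapping the grouped field.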
Inpatient Rehabilitation Facilities (“IRFs”)
Reimbursement Overview: IRFs are freestanding rehabilitation hospitals or units in
acute care hospitals that provide intensive rehabilitation services (i.e., at least three
hours of intense therapy per day). IRFs are paid an amount of money each time that a
patient leaves the facility (i.e., a per discharge payment) and this payment covers the
provision of services to Medicare beneficiaries based on a PPS. The IRF PPS covers all
costs of furnishing services (routine, ancillary, and capital-related), with limited exception,
such as costs related to operating certain educational activities. Reimbursement for
each IRF patient is based on a patient assessment process where patients are classified
into distinct groups based on clinical characteristics and expected resource needs.
Patients are classified using the IRF Patient Assessment Instrument, which contains
clinical, demographic, and other information. Separate payments are calculated for
each group, including the application of case-mix and facility-level adjustments.
Key Data Elements: When responding to a DOJ investigation related to an IRF, it is
useful to understand the distribution of cases among the case-mix groups (“CMGs”)
that Medicare uses for reimbursement. Certain CMGs offer larger reimbursement
and/or a greater margin, making it imperative to understand the practice pattern at
a facility. In a similar vein, one should understand the extent to which Medicare made
outlier payments to the facility, for what cases, and for what time periods. Outlier
payments are Medicare payments that are in addition to the payment calculated in
accordance with the established payment methodology. This additional payment
covers additional care that the patient received and that is considered by Medicare to
be outside the normal amount of care expected by the payment methodology. A full
analysis of readmissions is useful to understand the quality of care being provided.
This would include readmissions to IRFs or short-term acute care hospitals. Lastly,
the overall LOS should be analyzed to understand how it may have shifted.
Long Term Acute Care Hospitals (“LTCHs”)
Reimbursement Overview: LTCHs treat patients with multiple comorbidities requiring
long-stay hospital-level care and are certified under Medicare as short-term acute
care hospitals. LTCHs are generally defined as having an average inpatient length of
stay (“ALOS”) of greater than 25 days and are excluded from the acute care hospital
inpatient PPS. Instead, LTCHs are paid by Medicare under the LTCH PPS, based on
prospectively set rates. The LTCH PPS classifies patients into distinct diagnostic
groups based on clinical characteristics and expected resource needs. Payment for a
Medicare patient will be made at a predetermined, per discharge amount pursuant
to the patient’s assigned Medicare Severity Long-Term Care Diagnosis-Related Group
(“MS-LTC-DRG”), which is based on diagnoses, procedures performed, age, gender, and
discharge status. Medicare calculates the ALOS for each MS-LTC-DRG. If a patient’s
stay is at least five-sixths of the ALOS calculated by Medicare for that MS-LTC-DRG, then the
LTCH will receive the full amount of Medicare’s payment. If the patient stays for less
than five-sixths of the ALOS calculated by Medicare for that MS-LTC-DRG, then the
LTCH will only receive five-sixths of Medicare’s payment. For example, if a MS-LTC-
DRG has an ALOS of 30 days, and a patient stays in the LTCH for 25 days (5/6 of 30), the
LTCH will receive the entire Medicare payment. However, if the patient is discharged
on day 23, the facility will receive something less than the full payment.
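The payment rule as described above reduces to simple arithmetic. This sketch implements only the simplified five-sixths rule stated in this article (the $60,000 payment figure is hypothetical); actual LTCH short-stay outlier payments involve additional formulas not shown here:

```python
def ltch_payment(full_payment, los_days, alos_days):
    """Simplified LTCH payment under the five-sixths rule described above:
    full payment if the stay reaches 5/6 of the MS-LTC-DRG's ALOS,
    otherwise 5/6 of the full payment."""
    threshold = alos_days * 5 / 6
    return full_payment if los_days >= threshold else full_payment * 5 / 6

# Matching the example above: an ALOS of 30 days gives a 25-day threshold.
paid_full = ltch_payment(60000, 25, 30)    # stay hits the threshold
paid_short = ltch_payment(60000, 23, 30)   # discharged on day 23
```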
Key Data Elements: It is imperative to measure outlier payments when considering
LTCHs as the outlier thresholds can provide significant insight into a facility’s
operations. A case at an LTCH can be considered a short stay and/or high cost outlier,
both of which have ramifications on the amount of money received by the LTCH. It is
important to analyze the historic readmission percentage either to the same LTCH or
to a short-term acute care hospital. Specific diagnoses and procedures should also be
analyzed, as these are often of interest during investigations since these diagnoses/
procedures contribute significant revenue.
Hospice Providers
Reimbursement Overview: Medicare hospice providers are paid a daily payment rate
for each day a patient is enrolled in the hospice benefit, which covers all costs incurred
in furnishing services identified in a patient’s plan of care (whether provided directly
by the hospice provider or arranged by the hospice provider), based on the level of care
required to meet the patient’s and family’s need. The levels of care are (i) routine home
care; (ii) continuous home care; (iii) inpatient respite care; and (iv) general inpatient
care. Payments are made regardless of amount of services furnished on any given day.
Effective January 1, 2016, a service intensity add-on (“SIA”) payment is available for
services furnished at the end of life.
Key Data Elements: Hospice providers must be aware of and analyze the change in acuity
of patients over time in conjunction with the change in LOS. During an investigation,
it is also important to understand the distribution in each level of hospice care and
how this changed over time. A key measure to understand the admission policies of a
hospice is to analyze the discharge versus death ratio over time. Additional measures
to be analyzed include the duration and continuity of home care and the distribution
of the categories of care (e.g., routine home care, inpatient respite).
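As an illustration of tracking the discharge-versus-death ratio over time, the sketch below counts live discharges per death by year; the disposition codes and data are hypothetical, not from any actual hospice data dictionary:

```python
def discharge_death_ratio_by_year(dispositions):
    """`dispositions` is an iterable of (year, outcome) pairs, where
    outcome is 'live_discharge' or 'death' -- hypothetical codes.
    Returns {year: live discharges per death} for years with deaths."""
    counts = {}
    for year, outcome in dispositions:
        live, dead = counts.get(year, (0, 0))
        if outcome == "live_discharge":
            live += 1
        elif outcome == "death":
            dead += 1
        counts[year] = (live, dead)
    return {y: live / dead for y, (live, dead) in counts.items() if dead}

# Invented example: the ratio doubles, which might prompt a closer look
# at admission practices.
data = ([(2016, "death")] * 8 + [(2016, "live_discharge")] * 2
        + [(2017, "death")] * 5 + [(2017, "live_discharge")] * 5)
ratios = discharge_death_ratio_by_year(data)
```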
Home Health Agencies (“HHAs”)
Reimbursement Overview: HHAs are paid an episodic payment (for a 60-day episode
of care) for the provision of services to patients under a home health plan of care
based on a home health (“HH”) PPS. The HH PPS covers all services and supplies
(whether provided directly by the HHA or under arrangement), except certain covered
osteoporosis drugs and durable medical equipment. HH PPS payments are adjusted
for case-mix and geographic differences in wages. With respect to the case-mix
adjustment, payment rates are based on characteristics of the patient and his or her
corresponding resource needs (e.g., diagnosis, clinical factors, functional factors, and
service needs), as reflected in the Outcome and Assessment Information Set (“OASIS”).
Based on the OASIS, patients are classified into Home Health Resource Groups. The
HH PPS allows for outlier payments to be made for episodes with unusually large
costs that exceed a threshold amount. Low-utilization payment adjustments are
also available for patients who require four or fewer visits during the 60-day episode.
Finally, a partial episode payment adjustment is available when a patient elects to
transfer to another HHA or is discharged and readmitted to the same HHA during
the 60-day episode.
Key Data Elements: Similar to hospice providers, HHAs need to analyze the acuity
of patients over time. Unlike other post-acute providers, which are paid per diem or per
discharge, HHAs are paid for a fixed length of time (the 60-day episode). As such, one
must analyze the number of episodes for each beneficiary over the period under review.
Additionally, understanding how, over time, both low and high utilization episodes
have changed is helpful.
The Government’s Data Monitoring Activities
In addition to understanding the key data elements at issue, it is also important to understand
how these data elements may be monitored or examined by the government.
In 2002, Congress passed the Improper Payments Information Act (“IPIA”) to “provide for estimates
and reports of improper payments by federal agencies.” This Act covered improper payments by
all federal agencies, and Congress did not constrain the law to the Medicare program. However,
as the Medicare program accounts for a significant portion of the federal budget, this law brought
additional scrutiny to the Medicare program. The law required Medicare, like other federal
programs, to estimate the amounts of payments improperly paid and report the measures taken
to reduce the improper payments.
Congress amended the IPIA in 2010 via the Improper Payments Elimination and Recovery Act
and in 2012 via the Improper Payments Elimination and Recovery Improvement Act,
expanding the requirements to include recovering improper payments.
The IPIA, as amended, provided for the creation of the Hospital Payment Monitoring Program,
which created several standard reports, including:
•	 Program for Evaluating Payment Patterns Electronic Report (“PEPPER”): supports compliance
efforts by publishing payment risks and targets tailored to facility type;
•	 First-Look Analysis Tool for Hospital Outlier Monitoring (“FATHOM”): supports Quality
Improvement Organizations in their identification of outlier facilities that require more
investigation; and
•	 Comparative Billing Reports (“CBRs”): focus on a specific topic/service to determine
payment irregularities.
This article focuses on PEPPER reports, which are delivered to operators of many different types
of providers, including several post-acute provider types. “PEPPER provides provider-specific
Medicare data statistics for discharges/services vulnerable to improper payments. PEPPER can
support a hospital or facility’s compliance efforts by identifying where it is an outlier for these risk
areas. This data can help identify both potential overpayments as well as potential underpayments.”
The following types of providers receive PEPPER reports:
While the PEPPER program seeks to assist facilities in identifying “potential overpayments as well
as potential underpayments,” the value of these reports to investigators must be recognized by
the industry. The DOJ, the Department of Health and Human Services’ Office of Inspector General
(“OIG”), and other investigating agencies can utilize numerous metrics from the PEPPER reports
when analyzing the operations of a facility. The PEPPER reports also compare a facility to the
nation, the Medicare Administrative Contractor (“MAC”) jurisdiction,2 and the state.
The PEPPER reports define a set of metrics for each provider type. For each metric, the PEPPER
reports identify what may be indicated if a facility were to be considered an outlier. For example,
the user’s guide3 for the SNF PEPPER report provides suggested interventions if a facility is
at/below the 20th percentile or at/above the 80th percentile.
2	 “A Medicare Administrative Contractor (MAC) is a private health care insurer that has been awarded a geographic jurisdiction to
process Medicare Part A and Part B (A/B) medical claims or Durable Medical Equipment (DME) claims for Medicare Fee-For-Service
(FFS) beneficiaries.” MACs process all Medicare FFS claims for a given geographic area. More information on MACs can be found at:
https://www.cms.gov/medicare/medicare-contracting/medicare-administrative-contractors/what-is-a-mac.html.
3	 The PEPPER user’s guide is available at https://www.pepperresources.org/.
Provider types that receive PEPPER reports include:
•	 Short-Term Acute Care Hospitals
•	 Long-Term Acute Care Hospitals
•	 Critical Access Hospitals
•	 Inpatient Psychiatric Facilities
•	 Partial Hospitalization Programs
•	 Inpatient Rehabilitation Facilities
•	 Skilled Nursing Facilities
•	 Home Health Agencies
•	 Hospices
PEPPER defines outliers as those facilities outside the 20th or 80th percentile of all facilities in
the United States. With regard to a metric for which a facility is
an outlier, PEPPER indicates that a “provider may wish to review medical record documentation to
ensure that services beneficiaries receive are appropriate and necessary and that documentation
in the medical record supports the level of care and services for which the SNF received Medicare
reimbursement.”4
Although PEPPER recognizes that an outlier “does not necessarily indicate the
presence of improper payment or that the provider is doing anything wrong,”5
the investigating
agency/individual may not appreciate this possibility and may instead interpret the outlier status
as support for allegations of improper services or billing. With the analyses and benchmarks
available in the PEPPER reports, it is no surprise that investigators are becoming increasingly
comfortable relying on these reports as a front-line investigation tool.
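A provider can approximate this outlier screen on its own metrics. The sketch below flags facilities below the 20th or above the 80th percentile of a peer group; PEPPER’s exact percentile computation and comparison populations may differ, and the metric values here are invented:

```python
import statistics

def flag_outliers(values_by_facility):
    """Flag facilities whose metric falls below the 20th or above the
    80th percentile of the peer group -- a simplified version of the
    thresholds PEPPER uses."""
    values = list(values_by_facility.values())
    quintiles = statistics.quantiles(values, n=5)  # [p20, p40, p60, p80]
    p20, p80 = quintiles[0], quintiles[3]
    return {
        fac: ("low" if v < p20 else "high" if v > p80 else "typical")
        for fac, v in values_by_facility.items()
    }

# Invented metric, e.g., percent of stays billed at the highest RUG level.
metrics = {"A": 10, "B": 35, "C": 40, "D": 45, "E": 50,
           "F": 55, "G": 60, "H": 65, "I": 70, "J": 95}
flags = flag_outliers(metrics)
```

As the article notes, an outlier flag is a prompt for record review, not proof of improper billing.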
Another consideration with respect to information gleaned from PEPPER reports is whether
such data implicates the 60-day overpayment rule. Section 6402(a) of the Patient Protection and
Affordable Care Act established a new section of the Social Security Act requiring a person who
has received an overpayment to report and return the overpayment to the Secretary, the state,
an intermediary, a carrier, or a contractor, as appropriate, by the later of 60 days from when the
overpayment is “identified” or the date any corresponding cost report is due, if applicable.6
Any
overpayment retained by a person after the deadline for reporting and returning an overpayment
is an obligation for purposes of the FCA (a reverse false claim).7
In February 2016, CMS published a final rule related to this requirement, applicable to Medicare
Part A and Part B healthcare providers and suppliers.8
Under the final rule, a person has identified
an overpayment when the person has or should have, through the exercise of reasonable diligence,
determined that the person has received an overpayment and quantified the amount of the
overpayment.9
In the final rule, CMS clarified that “reasonable diligence” requires providers and
suppliers to undertake ongoing, proactive compliance activities to monitor claims, as well as
reactive investigative activities regarding any potential overpayments.10
Depending on the individual
circumstances, data analytics could be one of these ongoing compliance efforts requiring further
review and analysis.
4	 SNF PEPPER User’s Guide, Fifth Edition, p. 7.
5	Id.
6	 42 U.S.C. § 1320a-7k(d).
7	Id.
8	 81 Fed. Reg. 7654 (Feb. 12, 2016).
9	 42 C.F.R. § 401.305(a)(2).
10	 81 Fed. Reg. 7661.
Sampling
Another area in which data plays a significant role is in the context of sampling. In FCA investigations,
the DOJ or OIG may “draw a sample” or conduct a “sample review.” The implications of this are
significant, and providers should understand what this entails. Before discussing these implications, it is
helpful to define the word “sampling.”
A provider serves many patients in each time period. These patients are considered the universe.
In sampling, an individual develops an approach or sampling plan whereby a certain number of
individual patients from the universe are selected and grouped into what is called the sample. A
sampling plan can have many different designs and often involves the concept of randomness.
The sample is analyzed and conclusions are drawn. Often the DOJ or OIG will want to use the
conclusions from their analysis of the sample to make conclusions about the universe. If this is
the case, then an individual will complete a process known as extrapolation, whereby the sample’s
conclusions (e.g., overpayment amounts, error rate) are projected onto the universe.
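A minimal sketch of that process, assuming a simple random sample and mean-per-claim extrapolation (real reviews typically use stratified designs and report confidence intervals; the claim universe and review function below are invented stand-ins):

```python
import random

def extrapolate_overpayment(universe_claim_ids, review_overpayment,
                            sample_size, seed=0):
    """Draw a simple random sample of claims, total the overpayments
    found on review, and project the sample's mean overpayment across
    the universe. A deliberately simplified sketch of extrapolation."""
    rng = random.Random(seed)
    sample = rng.sample(universe_claim_ids, sample_size)
    sampled_overpayments = [review_overpayment(cid) for cid in sample]
    mean_overpayment = sum(sampled_overpayments) / sample_size
    return mean_overpayment * len(universe_claim_ids)

# Invented universe where every 10th claim was overpaid by $500.
universe = list(range(10000))
def review(cid):
    # Stands in for a clinician's medical-record review of one claim.
    return 500 if cid % 10 == 0 else 0

estimate = extrapolate_overpayment(universe, review, sample_size=200)
```

The estimate varies with the sample drawn, which is why the weight given to extrapolated figures is so heavily contested in the cases discussed below.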
The use of sampling has a long-standing history in the administrative context,11
but was not
statutorily authorized until the passage of the Medicare Prescription Drug, Improvement, and
Modernization Act of 2003 (“MMA”). The MMA established the Medicare Integrity Program, which
authorizes Medicare contractors to use extrapolation to determine overpayment amounts when (i)
there is a sustained or high level of payment error; or (ii) documented educational intervention has
failed to correct the payment error.12
The Medicare Integrity Program also authorizes a Medicare
contractor to request records or supporting documentation for a limited sample of submitted
claims to ensure that the previous practice is not continuing.13
In the context of FCA investigations, government subpoenas or civil investigative demands (“CIDs”)
often include requests for medical records associated with specific patients or claims, based on
a sample developed by the government or one of its contractors.14
In the context of FCA lawsuits, recent court decisions have addressed the legality of sampling as it
relates to establishing liability and damages. However, the issue is far from settled. The following
cases represent recent examples of court decisions involving these issues.
11	 See, e.g., 42 C.F.R. § 405.1064 (ALJ decisions involving statistical samples); Section III.B of the OIG’s Provider Self Disclosure Protocol
(requiring a provider’s overpayment calculation to “consist of a review of either: (1) all the claims affected by the disclosed matter or
(2) a statistically valid random sample of the claims that can be projected to the population of claims affected by the matter” ); HCFA
Ruling 86-1 (Hospital Insurance and Supplementary Medical Insurance Benefits (Parts A and B) Use of Statistical Sampling to Project
Overpayments to Providers and Suppliers).
12	 42 U.S.C. § 1395ddd(f)(3).
13	 42 U.S.C. § 1395ddd(f)(4).
14	 From a practical standpoint, subpoenas and CIDs operate in a similar fashion: they allow the government to request certain documents.
However, a CID goes further than a subpoena duces tecum and can require the recipient to not only produce documents, but to also
answer interrogatories and give oral testimony under oath. 31 U.S.C. § 3733(a). CIDs have become increasingly common since all
U.S. Attorneys can now issue CIDs. Prior to 2010, only the Attorney General was authorized to issue a CID and that authority could not
be delegated. However, the Fraud Enforcement and Recovery Act (2009) authorized the Attorney General to delegate that authority to
others within the DOJ.
United States ex rel. Martin v. Life Care Centers of America, Inc. (E.D. Tenn.)
The Life Care case was a qui tam action arising from allegations by two former employees
against the skilled nursing company; the government intervened in the case before it
was settled in October 2016.
The government’s central allegation was that Life Care pressured its therapists to
target Ultra High RUG levels and longer ALOS periods for patients to maximize its
Medicare revenue.15
The government contended that as a result of this pressure,
Life Care provided therapy that was not medically reasonable or necessary. The
government sought to prove its theory based on evidence from statistical sampling
and extrapolation of 400 patient admissions and 1,700 claims, representing 54,396
admissions and approximately 154,621 total claims.
Life Care sought partial summary judgment as to the government’s use of statistical
sampling and the use of unidentified claims, arguing that the government could not
establish falsity (i.e., liability) by extrapolation. The court denied partial summary
judgment, finding that “statistical sampling may be used to prove claims brought under
the FCA involving Medicare overpayment, but it does not and cannot control the weight
that the fact finder may accord to the extrapolated evidence.”16
In other words, the court
decided that determining the weight to afford the extrapolated evidence is best left to a
jury. Life Care then filed a motion to certify the summary judgment decision to the Sixth
Circuit for interlocutory appeal, which the court denied.17
Life Care and the government
settled the FCA lawsuit with no further rulings regarding the sampling issue.
U.S. ex rel. Michaels et al. v. Agape Senior Community Inc. et al. (4th Cir.)
In United States ex rel. Michaels v. Agape Senior Cmty., Inc., relators (former employees of
the Agape nursing home network) initiated a qui tam action claiming damages and other
relief under the FCA, the Anti-Kickback Statute, and the Health Care Fraud Statute.18
The government did not intervene in the case. In sum, the relators alleged that Agape
submitted false claims to several federal healthcare programs, including Medicare,
Medicaid, and TRICARE, seeking reimbursement for nursing home-related services.
15	 United States ex rel. Martin v. Life Care Centers of America, Inc., 2014 WL 10937088 (E.D. Tenn. Sept. 29, 2014).
16	 Order on Defendant’s Mot. for Partial Summary Judgment, Dkt. No. 184, United States ex rel. Martin v. Life Care Centers of America,
Inc., No. 1:08-cv 251 (E.D. Tenn. Nov. 24, 2014), dated Sept. 29, 2014.
17	 Order on Mot. to Certify the Court’s Order for Immediate Interlocutory Appeal, Dkt. No. 209, United States ex rel. Martin v. Life Care
Centers of America, Inc., No. 1:08-cv 251 (E.D. Tenn. Nov. 24, 2014), dated Nov. 24, 2014.
18	 United States ex rel. Michaels v. Agape Senior Cmty., Inc., 2015 WL 3903675 (D.S.C. June 25, 2015). The federal anti-kickback statute
and its implementing regulations make it a criminal offense to knowingly or willfully offer, pay, solicit, or receive any remuneration
in exchange for, or to induce, referring an individual to another person or entity for the furnishing, or arranging for or recommending
the purchase, of any item or service that may be paid for in whole or in part by a federal healthcare program, including Medicare and
Medicaid. 42 U.S.C. § 1320a-7b(b). The Health Care Fraud Statute makes it a criminal offense to knowingly and willfully execute a scheme
to defraud a healthcare benefit program. 18 U.S. Code § 1347.
The district court rejected the relators’ use of statistical sampling in proving liability
and damages, specifically finding that the Agape relators would be required to “prove
each and every claim based upon the evidence relating to that particular claim.” The
court also noted that statistical sampling would be appropriate when it is the only
way for a qui tam relator to prove damages, for example, when evidence has been
destroyed or dissipated. The court certified the issue of whether statistical sampling
can be used to demonstrate FCA liability without directly analyzing Medicare billing
claims, among others, for interlocutory appeal to the Fourth Circuit Court of Appeals.19
In a February 14, 2017 decision, the Fourth Circuit found that the certification of the
statistical sampling ruling for interlocutory review was not appropriate since the
question focused on whether the particular methods of statistical sampling used in
the Agape matter were reliable, and not the pure legal question of whether sampling is
a legally valid technique in determining damages in FCA actions.20
As such, the issue
of whether sampling is an acceptable method to calculate FCA claims or violates due
process remains outstanding, in the Fourth Circuit and elsewhere.
United States ex rel. Paradies v. AseraCare, Inc. (N.D. Ala.)
United States v. AseraCare Inc. arose out of allegations brought by three relators, in
coordination with the government, contending that hospice care provider AseraCare
submitted Medicare claims for patients who did not meet the criteria for hospice
care.21
In this case, the government sought to establish FCA liability using statistical
extrapolation, seeking more than $200 million in damages based on a sample of
approximately 120 patients.
The court initially denied AseraCare’s motion for summary judgment, concluding
that “statistical evidence is evidence” of falsity to defeat summary judgment.22
The
trial was then bifurcated into falsity and scienter phases.23
Following the first phase
(falsity), the judge granted a new trial based on an error in instructing the jury: the court had failed
to provide complete instructions as to what was legally necessary for the jury to
find that the claims before it were false.24
In March 2016, the court granted summary
judgment to AseraCare based on the government’s failure to prove falsity, explaining
19	 United States ex rel. Michaels v. Agape Senior Cmty., Inc., No. 15-238 (L) (0:12-cv-03466-JFA) (4th Cir. Sept. 29, 2015). The district court
also certified for interlocutory appeal the issue of whether the DOJ has absolute veto power over FCA settlements in cases where it
has not intervened. The DOJ blocked settlement between the relators and Agape, claiming that the proposed settlement amount was
too low and proposed release of legal liability too broad.
20	 United States ex rel. Michaels v. Agape Senior Community, et al., 2017 WL 588356 (4th Cir. 2017). With respect to the issue of veto power,
the Fourth Circuit held the government has an absolute veto power over voluntary settlements in FCA matters even when it declines
to intervene in the case.
21	 United States v. AseraCare Inc., 2015 WL 8486874 (N.D. Ala. Nov. 03, 2015).
22	 United States v. AseraCare Inc., 2014 WL 6879254 (N.D. Ala. Dec. 4, 2014) (emphasis in original).
23	 Order Granting Motion to Bifurcate, United States v. AseraCare Inc., No. 2:12-CV-245-KOB (N.D. Ala. May 20, 2015).
24	 United States v. AseraCare Inc., 2015 WL 8486874 (N.D. Ala. Nov. 03, 2015).
that mere differences in clinical judgment are not enough to establish FCA falsity, and
the government had not produced evidence other than conflicting medical expert
opinions.25
The government has appealed to the Eleventh Circuit Court of Appeals.
As these cases show, sampling can have a significant impact on an investigation and/or litigation. A
provider, its external counsel, and expert consultants should be involved in all aspects of the sampling
to ensure that a fair and reasonable sample is drawn and that any extrapolations are appropriate.
In the sampling approach, aspects to consider include the methodology used to create the sample
(e.g., stratification), the representativeness of the sample, the confidence (degree of certainty)
levels, and the precision (range of accuracy) levels. These aspects will materially affect the size
and composition of the sample.
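As an illustration of how these choices interact, the following is a minimal sketch, using hypothetical confidence, variability, and precision figures (none drawn from the cases discussed here), of a standard sample-size approximation and a simple mean-per-claim extrapolation to the universe:

```python
import math
import statistics

def required_sample_size(z: float, cv: float, precision: float) -> int:
    """Approximate sample size so that the confidence interval's half-width
    stays within `precision` (expressed as a fraction of the mean);
    cv is the coefficient of variation (std dev / mean) of the claim universe."""
    return math.ceil((z * cv / precision) ** 2)

# 90% confidence (z ~ 1.645), CV of 1.2, desired precision of +/- 25% of the mean
n = required_sample_size(1.645, 1.2, 0.25)  # 63 claims

# Point estimate: mean overpayment per sampled claim, projected to the universe
sample_overpayments = [120.0, 0.0, 75.0, 310.0, 0.0]  # hypothetical audit results
universe_size = 10_000
projected = statistics.mean(sample_overpayments) * universe_size
```

Note how sensitive sample size is to the precision choice: tightening precision from plus-or-minus 25 percent to plus-or-minus 10 percent multiplies the required sample roughly by (25/10)², about sixfold, which is why these parameters materially affect the size and cost of a review.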
After analyzing the sample, it is important to consider any comparisons that are drawn between
the sample and any benchmarks. One must consider the qualitative and quantitative differences
among the facility, the sample, and any benchmarks offered.
In challenging the sample, a provider may want to consider conducting an evaluation of the
sampling plan, conducting an independent review of the sample claims, conducting a review of
non-sample claims (i.e., the universe), and/or challenging the credentials of reviewers analyzing
the sample. Should the sample be used in litigation, one may also consider a Daubert motion to exclude
sampling evidence that is unqualified.26
Regardless of whether sampling is used in an investigation or litigation, sampling requires a careful
review to ensure that it is being used appropriately. One of the key pitfalls to sampling is the concept
of randomness, as many people equate randomness with representativeness. It is important to
remember that pulling a sample randomly does not necessarily mean that the sample will be representative
of the universe. Further analysis is needed to ensure that representativeness has been satisfied.
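One simple representativeness check is to compare category shares in the drawn sample against the universe; the sketch below uses hypothetical claim categories and counts:

```python
from collections import Counter

def shares(counts: Counter) -> dict:
    """Convert raw counts to proportions."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

universe = Counter({"therapy": 5000, "nursing": 3000, "other": 2000})
sample = Counter({"therapy": 70, "nursing": 20, "other": 10})

# Largest absolute gap between sample and universe shares; a large gap
# signals that the (random) sample may not be representative.
max_gap = max(abs(shares(universe)[k] - shares(sample)[k]) for k in universe)
```

In practice, a formal goodness-of-fit test (e.g., chi-square) across several claim characteristics would replace this single-gap comparison, but the point stands: a random draw can still be badly skewed on dimensions that matter.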
Conclusion
The use of data analytics in the context of healthcare reimbursement and fraud prevention is
not a new concept. Government contractors have been analyzing data for payment and recovery
purposes over the past several years. In the fraud and abuse context, the government and its
contractors have also increasingly relied upon available data to identify potential issues for further
investigation of wrongdoing by providers. Relators and their counsel have also increasingly mined
publicly available claims data in bringing FCA qui tam actions.
25	 United States v. AseraCare, Inc., 176 F. Supp. 3d 1282 (N.D. Ala. Mar. 31, 2016).
26	 A Daubert motion, named after a Supreme Court case, Daubert v. Merrell Dow Pharms., 509 U.S. 579 (U.S. 1993), is a specific type of
motion in limine used to exclude the presentation of unqualified evidence to the jury.
What has changed is the government’s increasing reliance on data to develop theories of wrongdoing
by providers. As a result, it is imperative that providers are well aware of their own data, and the
optics of such data, particularly as it compares to the data of other, similar providers, which is
available through public sources. Providers should be proactively monitoring their own data as
it relates to the relevant data elements discussed above. Proactive monitoring requires not only
an awareness of the actual data metrics, but also an understanding of and appreciation for the
factors that contribute to, or influence, the metrics. Knowing this information will allow a provider
to quickly and intelligently respond to a government investigation, if necessary. Further, in the
context of government investigations, data analytics can be used by providers to contradict, or put
into more accurate context, government allegations of wrongdoing, to resolve an investigation, to
assist in settlement negotiations, or to dissuade the government from intervening in a qui tam case.
Antitrust Market Definition—A Review of Five
Influential Papers
Audrey Boles, Sam Brott, and Michele Martin*
Introduction
In 1982, economist and Nobel Laureate George Stigler issued the following call to action:
My lament is that this battle on market definitions, which is fought thousands of times
what with all the private antitrust suits, has received virtually no attention from us
economists. Except for a casual flirtation with cross-elasticities of demand and supply,
the determination of markets has remained an undeveloped area of economic research
at either the theoretical or empirical level.1
Since that time (and, in fact, a few years prior), economists have crafted both empirical and
theoretical methods that could be used to define antitrust markets to aid inquiry into competitive
effects. What follows is a collection of summaries of five papers that have influenced thinking
about market definition over the last thirty years.2
We focus on five papers that embody interesting points in the history and evolution of thinking
about market definition in the United States:3
•	 Kenneth G. Elzinga and Thomas F. Hogarty, “The Problem of Geographic Market Delineation
in Antimerger Suits” (1973)
•	 David T. Scheffman and Pablo T. Spiller, “Geographic Market Definition under the US
Department of Justice Merger Guidelines” (1987)
*	 Audrey Boles is an engagement manager with Applied Predictive Technologies, a business analytics software company. Previously, she
was a consultant with BRG, where she specialized in data analysis for antitrust, intellectual property, and healthcare litigation matters.
She can be reached at aboles@predictiveTechnologies.com.
	 Sam Brott is a consultant at BRG, where he specializes in economic data analysis for antitrust and intellectual property litigation. He
can be reached at sbrott@thinkbrg.com.
	 Michele Martin is a public policy data scientist at Humana. Her role involves using data analysis and research to support public
policy advocacy. Previously, she was a consultant at BRG, where she applied her analytical expertise to litigation and internal
investigations for a wide range of healthcare clients, including health insurers and pharmaceutical manufacturers. She can be
reached at michmartin809@gmail.com.
1	 George J. Stigler, “The Economists and the Problem of Monopoly,” 72 Am. Econ. Rev. 1 (1982): 9.
2	 Noteworthy contributions that are not reviewed here include papers such as: George Hay, John C. Hilke, and Philip B. Nelson, “Geographic
Market Definition in an International Context,” 64 Chicago-Kent L. Rev. 711 (1988); Steven Salop and Serge Moresi, “Updating the Merger
Guidelines: Comments” (November 9, 2009), available at: https://www.ftc.gov/sites/default/files/documents/public_comments/
horizontal-merger-guidelines-review-project-545095-00032/545095-00032.pdf; Michael Salinger, “The Concentration-Margins Relationship
Reconsidered,” Brookings Papers on Econ. Activity: Microeconomics 287 (1990); Gregory J. Werden, “Demand Elasticities in Antitrust
Analysis,” 66 Antitrust L.J. 363 (1998): 384–396; and Gregory J. Werden and Luke M. Froeb, “Correlation, Causality, and All That Jazz:
Inherent Shortcomings of Price Tests for Antitrust Market Delineation,” 8 Rev. Indus. Org 329 (1993).
3	 A more comprehensive overview can be found in Greg J. Werden, “The History of Antitrust Market Delineation,” 76 Marq. L. Rev. 123
(1992).
•	 Barry C. Harris and Joseph J. Simons, “Focusing Market Definition: How Much Substitution
is Necessary?” (1989)
•	 Joseph Farrell and Carl Shapiro, “Antitrust Evaluation of Horizontal Mergers: An Economic
Alternative to Market Definition” (2010)
•	 Louis Kaplow, “Market Definition: Impossible and Counterproductive” (2013)
We also summarize critiques of the papers to properly frame the limits of the proposed methods.
In many cases, these limitations were first articulated by the originating author(s), along with
calls to consider them before applying the various methods.
We begin with Elzinga and Hogarty’s “The Problem of Geographic Market Delineation in Antimerger
Suits.”4
This paper represents a position taken relatively early that “the definition of a market offered
by classical economists can and should be used in the antitrust context.”5
Their analysis of trade
flows into and out of specified regions was initially employed by merging companies as a means
to define geographic markets. Following several losses in court, the Federal Trade Commission
(FTC) downplayed the applicability of the Elzinga-Hogarty test as a viable method to delineate
antitrust markets in hospital mergers, a position that has been accepted in more recent decisions
regarding hospital mergers.6
Nevertheless, Elzinga and Hogarty’s emphasis on tests “being generally
consistent with economic analysis” and “reasonably applicable by antitrust practitioners” set a
standard for later market definition analytical developments.7
One development was the publication of the 1982 Merger Guidelines.8
The 1982 Merger Guidelines
(revised in 1984) represent the first adoption of the hypothetical monopolist test (HMT) by US
enforcement agencies. Now widely used, the HMT was initially criticized as being “completely
nonoperational” because “no method of investigation of data is presented, and no data, even
those produced by coercive process, are specified that will allow the market to be determined
empirically.”9
The “nonoperational” aspect of the HMT was bridged, in part, by the next papers
4	 Kenneth G. Elzinga and Thomas F. Hogarty, “The Problem of Geographic Market Delineation in Antimerger Suits,” 18 Antitrust Bull. 45
(1973): 81.
5	 Werden (1992).
6	 American Bar Association, Health Care Mergers and Acquisitions Handbook, Second Edition (2018): 54–55.
7	 Elzinga and Hogarty (1973): 81.
8	 A total of six merger guidelines have been promulgated by US antitrust authorities, including revisions: US Department of Justice
(DOJ), Merger Guidelines (1968), available at http://www.justice.gov/atr/hmerger/11247.pdf (“1968 Merger Guidelines”); US DOJ,
Merger Guidelines (1982), available at http://www.justice.gov/atr/hmerger/11248.pdf (“1982 Merger Guidelines”); US DOJ, Merger
Guidelines (1984), available at http://www.justice.gov/atr/hmerger/11249.pdf (“1984 Merger Guidelines”). The 1984 Merger Guidelines
were superseded by the Horizontal Merger Guidelines, which was jointly issued by the DOJ and the FTC. US DOJ and FTC, Horizontal
Merger Guidelines (1992), available at https://www.justice.gov/sites/default/files/atr/legacy/2007/07/11/11250.pdf (“1992 Horizontal
Merger Guidelines”); US DOJ and FTC, Horizontal Merger Guidelines (1997), available at https://www.justice.gov/sites/default/files/atr/
legacy/2007/08/14/hmg.pdf (“1997 Horizontal Merger Guidelines”); US DOJ and FTC, Horizontal Merger Guidelines (2010) [hereinafter
Horizontal Merger Guidelines (2010)], available at http://ftc.gov/os/2010/08/100819hmg.pdf (“2010 Horizontal Merger Guidelines”).
9	 George J. Stigler and Robert A. Sherwin, “The Extent of the Market,” 28 J.L. & Econ. (1985): 555, 582.
included in our summary. Both Scheffman and Spiller (1987)10
and Harris and Simons (1989)11
pick up where the 1984 Merger Guidelines left off. They introduce empirical methods for defining
relevant antitrust markets that are consistent with the 1984 Merger Guidelines: residual demand
and critical loss, respectively. The fact that these two methods continue to be used decades after
their introduction is indicative both of their usefulness and of the HMT’s continued prevalence as
the approach to defining markets.
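Critical loss, as commonly stated following Harris and Simons, asks how much volume a hypothetical monopolist could lose before a small but significant price increase becomes unprofitable. A minimal break-even sketch, with an illustrative price increase and margin:

```python
def critical_loss(price_increase: float, margin: float) -> float:
    """Break-even share of unit sales that can be lost before a uniform
    price increase becomes unprofitable;
    margin = (price - marginal cost) / price."""
    return price_increase / (price_increase + margin)

# A 5% price increase with a 40% margin: losing more than ~11% of volume
# makes the increase unprofitable.
loss = critical_loss(0.05, 0.40)
```

If the "actual loss" predicted from demand evidence exceeds this critical loss, the candidate market fails the hypothetical monopolist test and must be broadened.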
But science is never settled, and intrinsic to its method is a process of continued scrutiny and
evaluation. One of the latest tools to be introduced (and included as part of the 2010 Horizontal
Merger Guidelines) is the concept of upward pricing pressure (UPP) as initially developed by Farrell
and Shapiro (2010).12
UPP represents a notable departure from defining markets in order to infer
market power from market shares. Instead, its emphasis is on evaluating whether lost sales of
a newly merged firm’s product due to a price increase can be internalized through increased
sales of a separate product, such that prices may be profitably increased after accounting for
post-merger efficiencies. Unlike its predecessor methods, UPP does not hinge on a properly
defined market from which market shares can be calculated, though in practice market shares
often play a role in UPP analyses.
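The first-order UPP screen for one of the merging products can be sketched as follows; the diversion ratio, prices, costs, and efficiency credit below are hypothetical, and this is only the commonly stated first-pass form of the test:

```python
def upp(diversion_12: float, price_2: float, cost_2: float,
        cost_1: float, efficiency: float) -> float:
    """Net upward pricing pressure on product 1 after merging with product 2:
    the value of sales diverted to product 2, less a default efficiency
    credit applied to product 1's marginal cost."""
    return diversion_12 * (price_2 - cost_2) - efficiency * cost_1

# 25% diversion to a product with a $40 absolute margin, against a 10%
# efficiency credit on a $50 marginal cost: positive pressure of $5 per unit.
pressure = upp(0.25, 100.0, 60.0, 50.0, 0.10)
```

A positive value flags the merger for closer scrutiny without any market having been defined, which is the departure from the share-based approach noted above.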
While many papers have focused on empirical applications related to the HMT, Kaplow (2013)
criticizes the current “market redefinition” regime embodied in the HMT.13
Kaplow argues (in this
and his other papers and speeches) to eliminate the use of the HMT and instead emphasizes what
he considers more pragmatic (and ad hoc) methods of market definition, much like the methods
that were prevalent prior to the 1984 Merger Guidelines. In that sense, Kaplow seeks to return
emphasis to the economic intuition behind market definition, rather than on the application of
a framework that, according to him, provides little practical purpose.
Kaplow’s criticisms echo many found in the earlier literature. Indeed, Stigler
and Sherwin’s criticism could just as easily have come from Kaplow:
Why the factual inquiry necessary under [the HMT] approach – coupled with
quantification of market shares and judgment concerning the level and changes in
concentration is any easier than asking directly whether the merger will result in an
increased price (the question that is, after all, the one to be answered) is beyond us.14
10	 David T. Scheffman and Pablo T. Spiller, “Geographic Market Definition under the US Department of Justice Merger Guidelines,” 30 The
Journal of Law and Economics 1 (1987).
11	 Barry C. Harris and Joseph J. Simons, “Focusing Market Definition: How Much Substitution is Necessary?” Research in Law and Economics
207 (1989).
12	 Joseph Farrell and Carl Shapiro, “Antitrust Evaluation of Horizontal Mergers: An Economic Alternative to Market Definition,” 10 The
B.E. Journal of Theoretical Economics 1, Article 9 (2010): 2, 34.
13	 Louis Kaplow, “Market Definition: Impossible and Counterproductive,” 79 Antitrust Law Journal 1 (2013): 361–379.
14	 Stigler and Sherwin (1985).
The importance of and specific methods used in the definition of markets in antitrust matters
will continue to evolve, especially as more detailed information becomes available to economists.
However, even newly developed methods likely will continue to embody thinking developed by
scholars over the last thirty years.
The Elzinga-Hogarty Test
Elzinga and Hogarty (1973) introduce a test of geographic market definition now referred to as
the Elzinga-Hogarty test.15
The proposed method is based on economic arguments put forward
during US v. Pabst and US v. Philadelphia National Bank.16
The Elzinga-Hogarty test is limited to
geographic market definition (rather than product market definition), which, according to the
authors, had received limited attention by academic researchers at the time the article was written.
Elzinga and Hogarty note that, prior to their research, many antitrust experts used comparisons
of prices in defining an antitrust market. However, as they point out, defining a market based on
price comparisons is unreliable for two main reasons. First, assigning an accurate, all-encompassing
economic price to any good or service is difficult. Second, price may be determined primarily by
local supply and demand conditions rather than by competition between suppliers in two different
geographic areas. That is, two geographic areas could have similar prices because they have similar
demand and supply characteristics, not because they belong to the same geographic market.
Elzinga and Hogarty focus on the geographic market definition issues decided by the US Supreme
Court in US v. Pabst addressing the Pabst and Blatz merger of 1958. Elzinga and Hogarty surmise that
the geographic market accepted by the Court was incorrect because the government examined the
supply of beer moving into the hypothetical market, but did not look at the supply of beer moving
out of the market. The Department of Justice (DOJ) argued that the market for Pabst should be
defined as the state of Wisconsin because 80 percent of the beer consumed in Wisconsin was also
brewed in Wisconsin. However, the authors argue that the market might properly have been
defined as an area encompassing five states, because it is also appropriate to consider exports
out of an alleged market. While 80 percent of the beer consumed in Wisconsin was brewed in
Wisconsin, less than 25 percent of the beer brewed in Wisconsin was consumed in Wisconsin.
Elzinga and Hogarty also examine the suit regarding the proposed merger of Philadelphia National
Bank and Girard Trust Corn Exchange Bank in the industry for commercial banking services,
which was ultimately blocked by the Supreme Court in 1963. Here, too, the authors find fault with
the DOJ’s proposed delineation of the geographic market, which was ultimately accepted by the
Supreme Court, viewing it as too narrow. In this case, however, as opposed to the beer example,
the government overlooked the flow of business into the hypothetical market and only focused on
15	 Elzinga and Hogarty (1973): 45–81.
16	 United States v. Philadelphia National Bank, 201 F. Supp. 348 (1962), 374 US 321 (1963). United States v. Pabst Brewing Company, 233 F.
Supp. 475 (1964); 384 US 546 (1966); 296 F. Supp. 994 (1969).
the flow of business out of the area. The Elzinga-Hogarty test was novel in its examination of supply
flowing both into and out of a hypothetical market.
The Elzinga-Hogarty test has two parts in defining a geographic market: the Little In From Outside
(LIFO) and Little Out From Inside (LOFI) tests. The first step of the test is to create a starting point
by taking the largest location of the largest of the merging firms, and then finding “the minimum
area required to account for at least 75 percent of the appropriate ‘line of commerce’ shipments
of that firm (or plant).”17
This area is now the hypothetical market area. If the merging parties are
in different geographical areas, then this step must be followed for each area.
The next step is to perform the LIFO test. The LIFO test requires that the hypothetical market
area be expanded until 75 percent of the total sales of the relevant product within the current
hypothetical market area are shipped from plants within the given area. The authors note that
if, after continuing to expand the area, the test is never satisfied, then the hypothetical market
area “is (at least) national in scope.”18
Once the LIFO test is satisfied, the final part of the exercise is the LOFI test. To satisfy the LOFI test,
the hypothetical market area must, if necessary, be expanded until 75 percent of the shipments
of the relevant product by firms within the area are to customers within the area. After both tests
have been satisfied, the market volume can be calculated by summing all consumption from
shipping points within the newly established hypothetical market area.19
Although the authors advocate for 75 percent as the threshold in their procedure, they acknowledge
that this value is arbitrary and also that a higher value such as 90 percent may be more appropriate.
This discussion is directly revisited by Elzinga and Hogarty in their 1978 follow-up paper, “The
Problem of Geographic Market Delineation Revisited: The Case of Coal.”20
In the follow-up, the
authors advocate for a 90 percent threshold because it often results in overlap between markets,
which is more characteristic of the real world. They found that when using 75 percent, as originally
proposed, too many gaps were created between markets that could not be accounted for.
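The LIFO and LOFI checks can be sketched as follows, with the threshold as a parameter. The shipment volumes are hypothetical (chosen to mirror the Pabst pattern described above), and the actual test iteratively expands the candidate area until both checks pass:

```python
def passes_elzinga_hogarty(internal: float, imports: float,
                           exports: float, threshold: float = 0.75) -> bool:
    """internal: shipments produced and consumed inside the candidate area;
    imports: shipments into the area; exports: shipments out of the area."""
    lifo = internal / (internal + imports)  # share of consumption supplied locally
    lofi = internal / (internal + exports)  # share of production consumed locally
    return lifo >= threshold and lofi >= threshold

# The Pabst pattern: 80% of the area's consumption is supplied locally
# (LIFO passes), but only 25% of local production stays in the area
# (LOFI fails), so the candidate area must be expanded.
wisconsin_alone = passes_elzinga_hogarty(internal=80, imports=20, exports=240)
```

Raising the threshold to 0.90, as Elzinga and Hogarty later advocated, only makes a candidate area harder to close, forcing broader markets with more realistic overlap.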
Elzinga and Hogarty’s 1978 paper was published in response to two major critiques of the original
paper by Giffen and Kushner (1976) and Shrieves (1975).21
These critiques claim to have found
flaws in the procedure, both citing data from the coal industry. Elzinga and Hogarty dismiss
the criticisms, pointing out where each critic failed to properly apply the LIFO and LOFI tests
in their respective analyses.
17	 Elzinga and Hogarty (1973): 73.
18	 Ibid., 74.
19	 The authors are not explicit on the next step if the LOFI test cannot be satisfied.
20	 Kenneth G. Elzinga and Thomas F. Hogarty, “The Problem of Geographic Market Delineation Revisited: The Case of Coal,” Antitrust
Bulletin 23 (1978): 1–18.
21	 Phillip E. Giffen and Joseph W. Kushner, “Geographic Submarkets in Bituminous Coal: Defining a Southeastern Submarket,” Antitrust
Bulletin 21 (1976): 67–79. Ronald E. Shrieves, “Geographic Market Areas and Market Structure in the Bituminous Coal Industry,”
Appalachian Resource Project, University of Tennessee, ARP 45 (1975).
Giffen and Kushner (1976) claim that the test was unreliable because their application of the
test to the coal industry did not yield the same southeastern market that they believed existed.
However, Giffen and Kushner did not account for the LOFI portion of the test, ignoring the flow
of supply out of the hypothetical market.
Shrieves (1975) modifies the Elzinga-Hogarty test to include an analysis of pricing data and argues
that Wisconsin should be treated as its own market in the coal industry due to its self-sufficiency.
Elzinga and Hogarty (1978) dismiss this critique for its lack of not only a LOFI application, but also
a LIFO application. They additionally reject Shrieves’ approach for creating hypothetical markets
based, at least partially, on pricing data.
Gregory J. Werden’s (1981) critique of the Elzinga-Hogarty test consisted of two major points.22
The
critique, essentially an application of Hotelling’s Law, is characterized by Elzinga as setting forth
“a hypothetical example where there is one product and two firms spatially dispersed at A and B
with customers spread uniformly along a line joining A and B.”23
Werden’s idea is that, under the
Elzinga-Hogarty test, if there were positive transportation costs, then there would be a point C that
would divide sales into two territories, such that two distinct markets would meet, but not overlap,
at C. This scenario would allow either party, A or B, to expand or shrink its market at any time with
a slight change in price. Werden further argues that, under the same scenario, a measurement of
the cross-elasticity of demand would be a better test of where to delineate market boundaries,
with a high cross-elasticity of demand indicating that an area is a single market.
The same publication included a response to Werden by Elzinga.24
Elzinga provides a simple
situation in which a high cross-price elasticity would suggest competition when there is a lack
of competition: if A is competitive while B is monopolized, we could see a rise in the cross-
price elasticity, as some customers on the fringe would avoid paying high monopoly prices by
purchasing from A. Although two markets would remain, with one being monopolized, Werden’s
method would suggest that the two are competing.
In 2004, the DOJ and FTC jointly issued a report that critiqued the Elzinga-Hogarty test.25
The
report addresses several hospital merger cases in the 1990s in which the courts’ acceptance of
the Elzinga-Hogarty test played a role in US government losses. The critique’s main point is that,
because the test was designed for fungible commodities, it cannot be applied to service industries
such as the hospital industry. Further, the report argues that flows of patients are not appropriate
metrics of shipments, as envisioned in the Elzinga-Hogarty test, because some patients travel long
distances to obtain care for unique conditions that cannot be treated within their own localities.
As such, the use of the Elzinga-Hogarty test without adjustment would count, as an export (or
22	 Gregory J. Werden, “The Use and Misuse of Shipments Data in Defining Geographic Markets,” Antitrust Bulletin 26 (1981): 719–737.
23	 Kenneth G. Elzinga, “Defining Geographic Market Boundaries,” Antitrust Bulletin 26 (1981): 742.
24	 Ibid., 739–752.
25	 FTC and US DOJ, Improving Healthcare: A Dose of Competition, report (2004).
import), a patient who travels to another hospital, not because the patient seeks a more competitive
price, but because the patient seeks a service for which the hospitals do not compete to begin
with. Although the report critiques the use of the test in hospital cases, it does not dismiss the test
entirely—it cautions that the test cannot simply be followed in all industries. As Davis and Garcés
summarize, “Elzinga and Hogarty’s test can provide a useful piece of evidence when coming to a
view on the appropriate market definition… [But] it may seriously mislead those who apply the
test formulaically.”26
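The shipment-flow arithmetic that underlies the Elzinga-Hogarty test can be made concrete with a short sketch. The patient-flow numbers below are invented, and the LIFO ("little in from outside") and LOFI ("little out from inside") statistics with 75 and 90 percent cutoffs reflect the test's conventional formulation, which this section does not restate:

```python
def lifo(local_consumption_served_locally: float, total_local_consumption: float) -> float:
    """LIFO ("little in from outside"): share of the area's consumption
    (here, patient admissions) served by the area's own providers."""
    return local_consumption_served_locally / total_local_consumption

def lofi(local_production_sold_locally: float, total_local_production: float) -> float:
    """LOFI ("little out from inside"): share of the area's production
    (admissions at local hospitals) accounted for by local residents."""
    return local_production_sold_locally / total_local_production

# Hypothetical flows for a candidate hospital market: residents generate
# 1,000 admissions, 900 of them at local hospitals (100 "imported" care);
# local hospitals handle 1,100 admissions, 900 of them local residents.
lifo_stat = lifo(900, 1000)   # 0.90
lofi_stat = lofi(900, 1100)   # roughly 0.82

# Conventional cutoffs: both statistics >= 0.75 for a "weak" market,
# >= 0.90 for a "strong" market.
is_weak_market = lifo_stat >= 0.75 and lofi_stat >= 0.75     # True
is_strong_market = lifo_stat >= 0.90 and lofi_stat >= 0.90   # False
```

On these hypothetical flows the candidate area clears the weak-market threshold but not the strong one, because roughly 18 percent of local hospitals' admissions are patients traveling in from outside the area.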
Elzinga recently penned another paper on the topic, with coauthor Anthony Swisher (2011).27
Elzinga and Swisher critique Elzinga’s own method, writing that “two characteristics of hospital
services markets… may tend to undermine the utility of the Elzinga-Hogarty test in hospital
merger cases.”28
The two characteristics that the authors discuss are the “Silent Majority Fallacy”
and the “Payer Problem.” The Silent Majority Fallacy refers to the large number of patients who
do not travel as far as projected in response to a price increase because they strongly prefer
to receive treatment close to home. Similarly, the Payer Problem refers to the large number of
patients who would be projected to have strong aversions to price increases, but who would not
directly feel the impacts of the increases due to the role of insurance plans. Both characteristics
can lead to an overestimation of the size of the geographic market. However, the authors also
write that “[i]t remains to be seen… whether the Elzinga-Hogarty test will continue to be relied
on in more traditional, pre-closing merger challenges.”29
Overall, the Elzinga-Hogarty test has been useful in attempting to design a framework by which to
think about a geographical market delineation. As noted by Werden, Elzinga and Hogarty “were
the first economists to argue that the definition of a market offered by classical economists can
and should be used in the antitrust context. They also were the first to propose and apply a specific
method for using data to delineate markets.”30
The test continues to be used in certain circumstances; however, limitations have been recognized.
As pointed out by the DOJ and FTC, two appellate courts, and Elzinga himself, the test is not
well suited for market definition in hospital cases.31
Also, the Elzinga-Hogarty test is best used
with goods rather than with services. Additionally, its use is limited by data availability regarding
shipments or consumption in any given area.
26	 Peter Davis and Eliana Garcés, Quantitative Techniques for Competition and Antitrust Analysis, Princeton, NJ: Princeton UP (2010).
27	 Kenneth Elzinga and Anthony Swisher, “Limits of the Elzinga-Hogarty Test in Hospital Mergers: The Evanston Case,” International
Journal of the Economics of Business 18 (2011): 133–146.
28	 Ibid., 133.
29	 Ibid., 133.
30	 Werden (1992): 185.
31	 American Bar Association, Health Care Mergers and Acquisitions Handbook, Second Edition (2018): 54–55.
Residual Demand Analysis
Scheffman and Spiller’s 1987 paper “Geographic Market Definition under the US Department of
Justice Merger Guidelines” serves as an empirical guide to defining relevant antitrust markets.32
The publication of the Merger Guidelines, and Scheffman and Spiller’s subsequent application of
them, marked a key evolution in how geographic markets are defined in mergers reviewed by the
DOJ and FTC.
The 1984 Merger Guidelines define an antitrust market as “a product or group of products and a
geographic area in which it is sold such that a hypothetical, profit-maximizing firm, not subject to
price regulation, that was the only present and future seller of those products in that area would
impose a ‘small but significant and nontransitory’ increase in price [SSNIP] above prevailing or
likely future levels.”33
Scheffman and Spiller explain that to define such a geographic market, one
must start with a particular geographic area in which the product(s) at hand are sold. From here,
one would sequentially expand that area until the above conditions of an antitrust market are
met, such that all geographic areas with suppliers providing viable substitutes are included. The
purpose of the Merger Guidelines, Scheffman and Spiller explain, is to go beyond identifying an
economic market as that area where prices of goods are correlated to identifying the specific group
of producers and geographic area in which a horizontal merger has the potential to facilitate the
creation or enhancement of market power.
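The sequential expansion described above can be sketched as a simple loop. Everything here is schematic: the area names, the ordering of candidates, and the profitability oracle are illustrative assumptions standing in for the economic analysis that the Merger Guidelines actually require:

```python
def delineate_market(candidate_areas, ssnip_is_profitable):
    """Expand the candidate geographic market one area at a time until a
    hypothetical monopolist over the included areas could profitably
    impose a SSNIP; returns the smallest such set of areas.

    candidate_areas: areas ordered from the starting area outward,
    nearest substitutes first.
    ssnip_is_profitable: oracle taking the current list of areas and
    returning True if a SSNIP over that set would be profitable.
    """
    market = []
    for area in candidate_areas:
        market.append(area)
        if ssnip_is_profitable(market):
            return market
    return market  # all candidates included; the market may be broader still

# Toy oracle: a SSNIP only becomes profitable once the three closest
# areas are included in the candidate market.
areas = ["city", "suburbs", "adjacent county", "rest of state"]
result = delineate_market(areas, lambda m: len(m) >= 3)
# result -> ["city", "suburbs", "adjacent county"]
```

In practice the oracle is the hard part; the loop merely formalizes the "keep expanding until the conditions of an antitrust market are met" procedure described above.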
The paper begins by distinguishing antitrust markets from economic ones, positing that existing
empirical tests for delineating geographic markets were flawed because they did not recognize
inherent differences between the two. Scheffman and Spiller provide the classical definition of an
economic market as an “area and set of products within which prices are linked to one another
by supply- or demand-side arbitrage and in which those prices can be treated independently of
prices of goods not in the market.”34
In other words, taking into account transportation costs,
prices of products in the same economic market are directly linked by arbitrage. In a hypothetical
economic product market with three producers, a price increase by Producer A would result in
higher sales for Producers B and C. In turn, this would reduce the sales by Producer A. Thus, the
existence of Producers B and C weakens potential market power of Producer A.
In determining an antitrust market, one must go a step further and examine the supply responsiveness
of producers both within and outside the economic market. Potential entrants with a small supply
elasticity might be left out of an antitrust market, regardless of whether they are considered to be in
the same economic market, because they would not respond to increased demand in a substantive
way. On the other hand, producers that are not considered to be in the economic market, but who
represent a next-best substitute and have the capacity to respond to increased demand, may be
considered as part of the relevant antitrust market. This difference between economic and antitrust
markets is one point that Scheffman and Spiller extract from the Merger Guidelines.
32	Scheffman and Spiller (1987): 123–147.
33	US DOJ, 1984 Merger Guidelines: 3, available at http://www.justice.gov/atr/hmerger/11249.pdf.
34	Scheffman and Spiller (1987): 125.
The authors discuss two empirical tests that were, at the time the paper was written, widely
used for delineating geographic markets: price tests and shipment tests. Price tests examine the
relationship between prices in different areas over time. For two locations to be considered part
of the same geographic market, “prices in the two areas should move together with the difference
in prices in the two areas approximating marginal transportation costs.”35
Shipment tests, such as
the Elzinga-Hogarty test, rely on data on shipments into, out of, and between different geographic
areas to inform what constitutes a relevant geographic market.
The authors’ main purpose in explaining these tests is to highlight where they fall short. While
both types of tests successfully identify economic markets based on a classical definition, neither
provides information about the supply elasticity of different groups of producers, information that
Scheffman and Spiller deem crucial to the delineation of antitrust markets given the Merger Guidelines.
To illustrate their point: if two regions have few shipments of a particular product between them
or do not exhibit prices that are correlated, the above two tests would not consider them to be in
the same antitrust market. However, if one region has a highly inelastic supply, producers in the
other could theoretically raise prices above a competitive level. Therefore, an empirical test that
takes price elasticities into account is needed to define relevant antitrust markets in accordance
with rules laid out in the Merger Guidelines.
The authors recommend that residual demand analysis should be used to identify whether
a candidate market is, in fact, an antitrust market. A firm’s residual demand is a function of
overall market demand and the quantity supplied by other firms at various price points. More
specifically, the residual demand curve is the individual firm’s demand, which is that portion of
market demand not supplied by other firms. Figure 1 illustrates the residual demand curve for
Firm A in relation to overall market demand. The graph on the right of the figure shows overall
market demand and the quantity supplied by all firms except Firm A. The demand curve on the
left graph is the market demand curve shifted inward by the exact quantity produced by other
(non-Firm A) firms at each price.
35	 Ibid., 129.
Figure 1: Residual Demand Curve
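The relationship depicted in Figure 1 can be reproduced numerically. The linear demand and rival-supply curves below are invented for illustration; they are not drawn from Scheffman and Spiller's paper:

```python
def market_demand(p: float) -> float:
    """Hypothetical linear market demand: Q = 100 - 2p."""
    return max(0.0, 100.0 - 2.0 * p)

def rival_supply(p: float) -> float:
    """Hypothetical supply by all firms other than Firm A: Q = 10 + 3p."""
    return max(0.0, 10.0 + 3.0 * p)

def residual_demand(p: float) -> float:
    """Firm A's residual demand: the portion of market demand
    not supplied by the other firms at price p."""
    return max(0.0, market_demand(p) - rival_supply(p))

# At p = 10, market demand is 80 units and rivals supply 40,
# so Firm A faces a residual demand of 40 units.
```

Note that residual demand is more price sensitive than market demand: a price increase both shrinks the market and cedes sales to rivals, which is why the residual curve in Figure 1 sits inside the market curve.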
Scheffman and Spiller posit that a subgroup of producers’ elasticity of residual demand must be
sufficiently inelastic for there to be the potential for an increase in market power in the event
of a merger. This argument makes sense in the context of the Merger Guidelines. If demand for a
subgroup’s products with respect to (a) overall market demand and (b) demand for other producers’
products is not particularly sensitive to increases in price, it logically follows that all relevant
products have been added to the proposed antitrust market. In other words, delineation of the
relevant antitrust market is complete at that point. If, on the other hand, the quantity demanded
for the subgroup fluctuates with changes in the price of the other producers' product(s), the
product market needs to be expanded. Baker and Bresnahan, in their 1988 paper “Estimating the
Residual Demand Curve Facing a Single Firm,” illustrate this latter point when they say that “one
firm’s contraction of output will be offset exactly by another’s expansion.”36
After summarizing the conceptual framework, Scheffman and Spiller step through the underlying
mathematics and derive the residual demand function for a hypothetical homogenous product
that is produced in two distinct geographic locations. Variables taken into account include prices
in the location of interest, transportation costs between the two locations, known demand, and
cost shifters, as well as random shocks to demand and supply.
The authors apply their proposed approach to estimate residual demand for wholesale unleaded
gasoline in the eastern United States. Since prices of wholesale unleaded gasoline are highly correlated
throughout the geographic area east of the Rocky Mountains, price correlation tests would lead
antitrust economists to define the relevant antitrust market for this product as this entire region.
Scheffman and Spiller’s approach breaks the eastern United States into different combinations
based on the US Department of Energy’s division of geographic areas. The authors employ regression
analysis to estimate monthly residual demand price elasticities for each selected geographic area.
They take into account cost shifters in each area such as the price of crude oil, energy use, and total
refining capacity, as well as demand shifters such as personal income and gasoline prices. Their
results indicate different geographic markets from those indicated by using the standard price and
shipment tests. While they conclude that several study areas would constitute relevant antitrust
markets for wholesale unleaded gasoline, the market is not as broad as the entire eastern US.
36	Jonathan B. Baker and Timothy F. Bresnahan, "Estimating the Residual Demand Curve Facing a Single Firm," International Journal of Industrial Organization 6 (1988): 283–300, 284.
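The mechanics of such an estimation can be illustrated with a log-log regression on synthetic data. This sketch uses ordinary least squares with an exogenous cost shifter; Scheffman and Spiller's actual application must address the endogeneity of price (for example, with instruments), which this simplified example sets aside:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # e.g., monthly observations for one geographic area

# Synthetic exogenous shifters (stand-ins for crude oil prices, income, etc.)
ln_cost = rng.normal(0.0, 0.2, n)     # cost shifter
ln_income = rng.normal(0.0, 0.1, n)   # demand shifter

# Generate price and quantity with a "true" residual demand elasticity of -2.
ln_p = 1.0 + 0.8 * ln_cost + rng.normal(0.0, 0.05, n)
ln_q = 5.0 - 2.0 * ln_p + 0.5 * ln_income + rng.normal(0.0, 0.05, n)

# OLS of ln(Q) on ln(P) and the demand shifter; in a log-log specification
# the ln(P) coefficient is the estimated residual demand elasticity.
X = np.column_stack([np.ones(n), ln_p, ln_income])
beta, *_ = np.linalg.lstsq(X, ln_q, rcond=None)
elasticity = beta[1]  # close to -2 in this synthetic setup
```

If the estimated elasticity for a candidate area is large in magnitude, a hypothetical price increase there would be defeated by substitution, and the candidate market is too narrow.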
Scheffman and Spiller’s work has been integral in providing an empirical application of the Merger
Guidelines and has been cited over two hundred times. However, their employment of residual
demand analysis to delineate relevant antitrust markets has not gone without criticism. Several
papers highlight the importance of using residual demand estimation in conjunction with other
information due to potential limitations in the method.
For example, Froeb and Werden point to problems of extrapolation and nonstationarity.37 First,
residual demand analysis requires an antitrust economist to make inferences about demand
and cost conditions in the future. Although historical and current conditions can be relied on to
a point, there is no guarantee that these conditions will be the same in the future. Elasticity of
demand may not be sufficiently stable through time, for instance. Second, the authors reference
the issue of nonstationarity—that economic conditions are not always constant. This is an issue
particularly in the case of mergers because changing economic conditions often precede them.38
Critical Loss
Harris and Simons’ 1989 paper “Focusing Market Definition: How Much Substitution is Necessary?”
offers a pragmatic approach for following existing guidance on antitrust market definition.39 Both
the 1984 Merger Guidelines and existing case law present definitions of relevant product and geographic
markets; however, neither delivers straightforward ways of empirically delineating these markets.40
Harris and Simons’ paper introduces a key concept for use in antitrust merger cases when economists
are trying to discern whether a group of producers constitutes a relevant antitrust market.
The authors point to widespread criticism of the 1984 Merger Guidelines, which many have called
“unworkable in practice.”41
For example, Stigler and Sherwin write, “[t]his market definition has
one, wholly decisive defect: it is completely nonoperational. No method of investigation of data
is presented and no data, even those produced by coercive process, are specified that will allow
the market to be determined empirically.”42
The 1984 Merger Guidelines define an antitrust market
as “a product or group of products and a geographic area” for which a hypothetical monopolist
could impose a "small but significant and nontransitory price increase" and be profitable doing
so.43 Yet the 1984 Merger Guidelines do not clearly state how one should go about analyzing when
a given price increase is, or is not, profitable.
37	 Luke M. Froeb and Gregory J. Werden, “Residual Demand Estimation for Market Delineation: Complications and Limitations,” Review
of Industrial Organization 6 (1991): 33–48.
38	Ibid.
39	 Harris and Simons (1989): 207.
40	US DOJ, 1984 Merger Guidelines, available at http://www.justice.gov/atr/hmerger/11249.pdf.
41	 Harris and Simons (1989): 208.
42	 Stigler and Sherwin (1985): 582.
43	1984 Merger Guidelines, 3.
In addition to this critique of the 1984 Merger Guidelines, the authors review the concept of
“reasonable interchangeability” that has been cited in several major Supreme Court decisions,
including US v. du Pont & Co. The decision for this case reads, “[i]n considering what is the relevant
market for determining the control of price and competition, no more definite rule can be declared
than that commodities reasonably interchangeable by consumers for the same purposes make
up that ‘part of the trade or commerce.’”44
While this statement clearly indicates that products
that are substitutes for one another are in the same market, it does not state the extent to which
products must be interchangeable to be considered in the same antitrust market.
The authors outline an empirical method that can both (a) determine when a given price increase
would be profitable and (b) serve as a benchmark for the reasonable interchangeability standard.
With any price increase, firms will inevitably lose sales because some people do not want to pay
the higher prices; however, the sales that each firm retains will bring in more revenue per unit,
and the firm will not incur the variable costs of sales lost. The critical loss calculation aims to
determine “what producers could gain or lose from a price increase.”45
The authors’ process for determining the profitability of a price increase involves two main calculations.
First, one must calculate the critical loss for a given price increase. The critical loss denotes the
level of lost sales at which firms are indifferent between the prevailing market price and the
hypothetical higher price. In other words, profits at
each price are equal, and firms thus do not have an incentive to raise the price above prevailing
levels. Harris and Simons begin by setting profits in the two scenarios equal to one another and
subsequently walk the reader through steps to derive an equation for calculating critical loss. In
the end, the only variables needed to solve for this value are the hypothetical price increase and
the contribution margin, the latter of which is simply the additional profit earned on each unit
sold. The equation is as follows, where X is the critical loss, Y is the hypothetical percentage price
increase, and CM is the contribution margin:46
X = [Y/(Y + CM)]*100
A higher contribution margin will yield a lower critical loss due to the high value of each additional
unit sold. Conversely, the critical loss will be higher if the profit margin on each unit sold is low,
because the firm can afford to lose more customers and still be profitable given a hypothetical
price increase. Figure 2 visually depicts the concept of critical loss.
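A quick numerical check of the critical loss formula (the function name and example values are mine, not Harris and Simons'):

```python
def critical_loss(price_increase_pct: float, contribution_margin_pct: float) -> float:
    """Critical loss X = [Y / (Y + CM)] * 100, with the hypothetical price
    increase Y and the contribution margin CM in percentage terms."""
    y, cm = price_increase_pct, contribution_margin_pct
    return 100.0 * y / (y + cm)

# A 5% price increase with a 45% contribution margin: firms are
# indifferent at a loss of 5 / (5 + 45) * 100 = 10% of unit sales.
print(critical_loss(5.0, 45.0))  # 10.0

# A higher margin (95%) yields a lower critical loss, because each
# lost unit forgoes more profit.
print(critical_loss(5.0, 95.0))  # 5.0
```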
44	 United States v. du Pont & Co., 351 US 377 (1956), 394–395.
45	 Harris and Simons (1989): 157.
46	 Ibid., 161.
Figure 2: Critical Loss Profit Calculation47
The graph shows a demand curve and marginal cost curve for a hypothetical product. When the
firm raises the price for this product by ∆P, the quantity demanded decreases by ∆Q, resulting in
a new quantity demanded, Q−∆Q. Put simply, the critical loss value is the ∆Q/Q for which Gained
Profits equals Lost Profits, given some hypothetical price increase. Adjustments can be made to the
calculated critical loss if a product’s sales are directly or indirectly connected to those of another
product produced by the same firm. For instance, if a reduction in a firm’s sales for Product A
results in increased sales for the same firm’s Product B, the critical loss for the firm can be adjusted
upward. Further, if lower levels of production for a firm’s Product C also mean lower levels for
the same firm’s Product D, the critical loss can be adjusted downward. In this way, critical loss
analysis can be used in different settings.
The second step in the process involves estimating the magnitude of sales that would actually
be lost for a specified group of producers if they were to hypothetically increase prices by a
given percentage. To perform such an estimation, one must consider various players, including
customers, producers of the same product, and producers of other products. Residual demand
elasticity is one approach for understanding the reactions of market players to hypothetical price
increases. A firm's residual demand curve reflects the portion of market demand for that firm's
product that is not met by other firms in the industry at a given price. Residual demand elasticity
measures the responsiveness of this residual demand to increases in the price of that product.
In the context of antitrust merger analysis, a firm or group of firms’ elasticity of residual demand
must be sufficiently inelastic for there to be potential for market power. Each critical loss value
has a corresponding critical residual demand elasticity, which is calculated by dividing the critical
loss by the hypothetical price increase. This value reflects the greatest demand responsiveness
that can be tolerated before a price increase becomes unprofitable.
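Continuing that arithmetic with assumed numbers:

```python
def critical_elasticity(critical_loss_pct: float, price_increase_pct: float) -> float:
    """Critical residual demand elasticity: the critical loss divided by
    the hypothetical price increase (both in percentage terms)."""
    return critical_loss_pct / price_increase_pct

# A 10% critical loss against a 5% hypothetical increase gives a critical
# elasticity of 2: if residual demand is more elastic than 2 (in absolute
# value), the price increase is unprofitable.
print(critical_elasticity(10.0, 5.0))  # 2.0
```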
47	 Ibid., 159.
The authors state that residual demand elasticity is often difficult to calculate given data limitations
and suggest other tactics for quantifying consumer reactions to price changes. One recommended
approach encompasses estimating how much it would cost the consumer to switch from one product
to another. It logically follows that the lower the cost of substitution, the more likely a consumer
will switch to a viable substitute in the face of higher prices. A second option is to explore other
similar products and determine whether any can satisfy the same uses at comparable price and
quantity levels. Harris and Simons propose that employing these approaches in conjunction with
the critical loss calculation can provide a benchmark for “determining how much interchangeability
is sufficient to put two products or geographic areas in the same market.”48
To illustrate the applicability of the critical loss calculation, Harris and Simons point to the 1986
case FTC v. Occidental Petroleum Corp., in which both authors were retained by the defendants.
In this case, the FTC challenged Occidental's acquisition of the polyvinyl chloride resin
(PVC) assets of Tenneco Polymers. The court identified two relevant product markets:
(i) suspension homopolymer PVC resin and (ii) dispersion PVC resin. With the product markets
established, the analysis shifted to identifying the relevant geographic market. Using variable cost
and price information for these two products, the authors calculated contribution margins for each,
which they in turn used to estimate critical losses for a hypothetical price increase of 5 percent.
Following the steps outlined above, the authors then determined the actual loss in sales that
would occur if Occidental raised prices of PVC resin by 5 percent. This was done by analyzing
viable foreign substitutes. More specifically, the analysis considered whether customers would
be willing to purchase foreign-produced PVC resin and whether foreign PVC resin producers had
the capacity to supply additional PVC resin to the United States. Given the availability of foreign-
produced PVC resin, it was determined that the loss of sales from a hypothetical 5 percent price
increase would exceed the critical loss estimates. Ultimately, the court decided that "the United
States was an inappropriately small geographic market for both types of PVC resin."49
Since Harris and Simons’ publication of “Focusing Market Definition: How Much Substitution is
Necessary?” several papers have pointed out one way that critical loss analysis can be misused.
In their paper “Critical Loss: Let’s Tell the Whole Story,” Katz and Shapiro argue that critical loss
analysis can be “incomplete and potentially misleading.”50
When profit margins are very high, they
argue, one may conclude that, because lost sales have a significant negative impact on profits, “a
hypothetical monopolist controlling a group of products could not profitably raise prices."51 This
conclusion does not consider that high profit margins may imply that actual sales lost due to a price
increase are small, “and thus a price increase might be profitable even when critical loss is small.”52
48	 Ibid., 164.
49	 Ibid., 165.
50	 Michael L. Katz and Carl Shapiro, “Critical Loss: Let’s Tell the Whole Story,” Antitrust (2003).
51	 Ibid., 50.
52	Ibid.
O’Brien and Wickelgren convey similar sentiments in their 2003 critique of critical loss analysis
as an approach to defining relevant antitrust markets.53
Specifically, they point out that high pre-
merger profit margins may mean that customers are not price sensitive. In these situations, price
increases would not necessarily lead to large sales losses. O’Brien and Wickelgren specify, however,
that their critique “does not invalidate the critical loss formula derived [by] Harris and Simons
as an algebraic statement about the loss necessary to make a given price increase unprofitable.”54
Rather, their aim is to highlight ways in which the critical loss formula can be erroneously applied.
Scheffman and Simons respond to such criticisms, writing that “the significance… of [critical loss
analysis] lies in its ease of practical application and from the fact that it is merely 'arithmetic.'"55
In essence, they acknowledge that while critical loss can be misused, it remains valid as an approach
in assessing market definition in antitrust cases.
Upward Pricing Pressure
In their 2010 paper “Antitrust Evaluation of Horizontal Mergers: An Economic Alternative to Market
Definition,” Farrell and Shapiro aim to create a simple indicator to screen for unilateral effects of a
proposed merger in a differentiated product setting. Their indicator measures net upward pricing
pressure and is meant to provide insight into whether a proposed merger will likely cause price
increases. The authors argue that the screening tool they develop is practical and “more solidly
grounded in the underlying economics of unilateral effects than is the conventional approach”
without the need to predict "the full equilibrium adjustment of the industry to the merger."56 In
practice, however, UPP is often difficult to estimate and, at times, may still require defining a market.
As Farrell and Shapiro explain, merger review is both a common practice and a large undertaking.
Mergers meeting certain requirements57
must be reviewed and approved by the DOJ or FTC
(hereafter, the “Agencies”) before they can be consummated. The purpose of this review is to
ensure that the proposed merger does not substantially “lessen competition, or…tend to create a
monopoly.”58
The Agencies look for two different types of effects: coordinated and unilateral effects.
Coordinated effects occur if the merger makes collusion across firms more likely. Unilateral effects
occur if the merger incentivizes the newly merged firm to raise prices above the pre-merger levels.
A typical analysis evaluates market concentration within a defined “relevant market.” However,
in differentiated product markets, it is difficult to determine which products are in the relevant
market and which are out, resulting in "an inevitably artificial line-drawing exercise."59 To address
the difficulty of defining markets, the 2010 Horizontal Merger Guidelines endorse using the HMT.60
This test is designed to address relevant market definition, but Farrell and Shapiro argue that it
can result in excluding substitute products that compete to some degree with the products of
interest. Thus, the Guidelines' recommended method can lead to inappropriate market boundaries.
53	Daniel P. O'Brien and Abraham L. Wickelgren, "A Critical Analysis of Critical Loss Analysis," Antitrust Law Journal 71 (2003).
54	Ibid., 163.
55	David T. Scheffman and Joseph J. Simons, "The State of Critical Loss Analysis: Let's Make Sure We Understand the Whole Story," The Antitrust Source (2003): 1.
56	Farrell and Shapiro (2010): 2, 34.
57	Ibid., 1. Mergers of a "substantial" size are required to notify the Agencies of the proposed merger for review; as of 2010, the "size of transaction" threshold was $63.4 million.
58	2010 Horizontal Merger Guidelines, Section 1, p. 1 (citing Section 7 of the Clayton Act).
Farrell and Shapiro’s UPP methodology asks if a merger will generate net UPP in a differentiated
product market. Farrell and Shapiro describe two opposing forces that have an effect on price
after a merger. The first is a loss of direct competition because the merging firms are no longer
competing with each other as independent entities. After a merger, there is reduced competition
between two products, which will cause an upward pressure on price to the extent that quantity
diversions from a price increase of one product are absorbed by the second product. The second
force is a marginal cost savings due to efficiencies that are a result of the merger, which will cause
downward pressure on price. The net effect of these two forces is the indicator that Farrell and
Shapiro refer to as UPP. When this indicator is positive, incentives to increase price exceed the
efficiency cost savings, and the merger would then be flagged for more detailed review.61
To illustrate the upward pressure on prices that may occur post-merger, Farrell and Shapiro
imagine two merging firms A and B that compete in a standard Bertrand setting; these firms
produce Product 1 and Product 2, respectively.62
Post-merger, the firms are treated as separate
divisions within the same company and are told to jointly maximize profits. The incentives have
changed now that the firms have merged, because increased sales of Product 1 will cannibalize
some sales of Product 2, and vice versa. The cannibalization of Product 2 sales can now be viewed
as an opportunity cost of selling more of Product 1. This opportunity cost can be thought of as
a tax on each division’s output that deters increasing sales by lowering prices. For each division,
the “tax” is equal to the value of Product 2 sales that are cannibalized. To quantify this tax, one
must calculate the “diversion ratio.”63
The ratio is the impact on sales of Product 2 when the price
of Product 1 falls by enough to sell one more unit. For example, if the price of Product 1 falls and
one hundred more units are sold, but thirty fewer units of Product 2 are sold, the diversion ratio
is 0.3. The diversion ratio times the gross margin of Product 2 is then equal to the value of Product
2 sales cannibalized. Thus, this tax is essentially equal to the lost profits resulting from reduced
sales of Product 2 when sales of Product 1 increase. This post-merger "tax" can be thought of as an
increase in the marginal costs for each product, which could result in a unilateral price increase.
59	Farrell and Shapiro (2010): 4.
60	2010 Horizontal Merger Guidelines, Section 4.1.1.
61	As Farrell and Shapiro note, the two levels of review are not unique to a UPP analysis. Indeed, mergers reviewed using the more traditional market share and HHI approach also use a secondary level of review. See Farrell and Shapiro (2010): 3.
62	While their example assumes a Bertrand setting, Farrell and Shapiro note that UPP's fundamental assumptions do not rely on a Bertrand setting and can be utilized in a variety of frameworks, "although unsurprisingly the quantitative measure will vary if one knows how industry conduct departs from Bertrand." They make this point in a subsequent paper that responds to criticism from Epstein and Rubinfeld. See Joseph Farrell and Carl Shapiro, "Upward Pricing Pressure in Horizontal Merger Analysis: Reply to Epstein and Rubinfeld," 10 The B.E. Journal of Theoretical Economics 1 (2010): Article 41, 1. See also Roy J. Epstein and Daniel L. Rubinfeld, "Understanding UPP," 10 The B.E. Journal of Theoretical Economics 1 (2010), Article 21.
63	Epstein and Rubinfeld argue that the diversion ratio is closely related to cross-elasticity, as the two measure virtually the same thing but on different scales. They argue that cross-elasticity is often easier to calculate, as diversion ratios cannot be independently observed. Epstein and Rubinfeld (2010): 4–6.
Mergers may also create efficiencies as a result of marginal costs savings associated with combining
the operations of the two firms. Efficiencies resulting from these lower marginal costs can be
difficult for the Agencies to predict and quantify. For this preliminary assessment, Farrell and
Shapiro suggest looking at certain “default marginal-cost efficiencies” for each of the merging
firms’ overlapping products. A more detailed evaluation is postponed until after the screening
phase. The “default efficiencies” calculated during Farrell and Shapiro’s screening phase could be
based on evidence of efficiencies in comparable mergers. They suggest looking at efficiencies as
a fraction of pre-merger marginal cost for each product, but recognize that certain efficiencies,
such as an improvement in product quality, are not “naturally measured” as a fraction of marginal
cost. They note that the merging firms are not required to prove these efficiencies, and that it is
“the established policy” for horizontal mergers to be approved without proving efficiencies.64
These efficiencies counterbalance the potential unilateral price increase discussed above. If the
value of the diversion is greater than the value of the efficiencies, the net effect will be positive
UPP. Specifically, Farrell and Shapiro present the following formula for calculating net UPP, where
D12 is the diversion ratio from Product 1 to Product 2, M2 is the pre-merger gross margin of
Product 2, E1 is the default efficiency credited to Product 1 (as a fraction of marginal cost), and
C1 is the pre-merger marginal cost of Product 1:

Net UPP1 = D12 × M2 − E1 × C1
Farrell and Shapiro suggest that proposed mergers that have a positive UPP value should be
flagged for more detailed review.
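In other words, the screen flags a merger when the margin diverted to Product 2 exceeds the default efficiency credit on Product 1. A minimal sketch of that arithmetic with hypothetical numbers (none come from the paper):

```python
# Sketch of the Farrell-Shapiro screening arithmetic. All values hypothetical.

def net_upp(d12, m2, e1, c1):
    """Net upward pricing pressure on Product 1.

    d12: diversion ratio from Product 1 to Product 2
    m2:  pre-merger gross margin of Product 2 (price minus marginal cost)
    e1:  default efficiency credit, as a fraction of Product 1's marginal cost
    c1:  pre-merger marginal cost of Product 1
    """
    return d12 * m2 - e1 * c1

# Diversion of 20%, a $40 margin on Product 2, and a 10% default efficiency
# credit against a $50 marginal cost for Product 1:
upp1 = net_upp(d12=0.20, m2=40.0, e1=0.10, c1=50.0)
print(upp1)  # 3.0 > 0, so this merger would be flagged for detailed review
```

Note the screen needs only pre-merger margins, a diversion ratio, and a default efficiency parameter; no post-merger prices appear anywhere in the calculation.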
One of the benefits of this methodology, Farrell and Shapiro claim, is that it does not require
attempting to estimate the post-merger equilibrium prices, which can be difficult. To measure
the magnitude of the price change, it is necessary to know the rate at which a cost increase for a
product is “passed through.” This requires knowledge of the curvature of demand, which is often
difficult to fully understand without additional data that may not be available at the screening
phase (i.e., prior to a second request). Merger simulation models (and the HMT) require estimating
pass-through rates.65
Farrell and Shapiro believe that these models are trying to do more than is
necessary and claim that the UPP methodology is a robust yet simple way to start merger review.
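The curvature point can be illustrated with a textbook monopoly example (my illustration, not the paper's): with constant marginal cost, a monopolist facing linear demand passes through half of a cost increase, while one facing constant-elasticity demand passes through more than all of it.

```python
# Why demand curvature matters for pass-through (hypothetical numbers).

def monopoly_price_linear(a, b, c):
    # Linear demand q = a - b*p; profit maximization gives p* = (a/b + c) / 2
    return (a / b + c) / 2

def monopoly_price_isoelastic(eps, c):
    # Constant-elasticity demand q = k * p**(-eps), eps > 1;
    # profit maximization gives p* = c * eps / (eps - 1)
    return c * eps / (eps - 1)

dc = 1.0  # a $1 increase in marginal cost
pt_linear = (monopoly_price_linear(100, 1, 21) - monopoly_price_linear(100, 1, 20)) / dc
pt_iso = (monopoly_price_isoelastic(3.0, 21) - monopoly_price_isoelastic(3.0, 20)) / dc
print(pt_linear)  # 0.5: half the cost increase is passed through
print(pt_iso)     # 1.5: with elasticity 3, pass-through is eps/(eps - 1)
```

Two demand curves with the same local elasticity can thus imply very different price effects, which is why translating UPP into a predicted price change requires data on curvature that are rarely available at the screening phase.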
64	 Farrell and Shapiro (2010): 10. They also note that it is a matter of debate whether horizontal mergers generate efficiencies. They suggest
that their “default efficiency parameter can be set accordingly” to reflect “one’s optimism or pessimism about the ability of mergers to
create synergies.”
65	 In their response to Farrell and Shapiro, Epstein and Rubinfeld argue that UPP is a special case of a merger simulation model. Farrell
and Shapiro counter that the UPP is not what is typically meant by a “merger simulation model,” which attempts to quantify the post-
merger equilibrium price. See Epstein and Rubinfeld (2010): 2; Farrell and Shapiro (2010): 3.
Utilizing a Self-Financing Strategy for Projects - BRG Review 7-1-2018
Table of Contents

1. Letter from the Editor
Cleve B. Tyler, PhD

2. Proactively Responding to Government Investigations Using Data Analytics: An Examination of Data Considerations in the Post-Acute Context
Katie Pawlitz, Esq. and Greg Russo

3. Antitrust Market Definition—A Review of Five Influential Papers
Audrey Boles, Sam Brott, and Michele Martin

4. Utilizing a Self-Financing Strategy for Projects
Tito Cardoso
Letter from the Editor

Welcome to the seventh volume of the BRG Review, an official publication of Berkeley Research Group, LLC. This publication reviews several topics based on independent analysis by our authors. The breadth of material covered provides insight into some of the varied and interesting ongoing research performed around the world by experts and staff throughout BRG. Our experts comprise academics and private-sector professionals in fields including economics, finance, healthcare, and data analytics. BRG has over 1,100 professionals in more than 40 offices worldwide who apply innovative methodologies and analyses to complex problems in the business and legal arenas.

In our first paper, Greg Russo and attorney Katie Pawlitz address the role of data analytics in fraud investigations in the healthcare industry, both proactively by providers and during the course of a government investigation. The role of sampling in False Claims Act investigations and litigation is also explored in depth, including a review of case law regarding the legality of sampling for use in addressing liability and damages.

In our second paper, Samuel Brott, Michele Martin, and Audrey Boles provide detailed reviews of five influential papers regarding market definition for use in antitrust investigations and litigation. They review key contributions of each paper and some of the critiques that the core idea in each paper has faced. The authors are current and former staff members at BRG and highlight the depth of talent that exists within BRG.

In our last paper, Tito Cardoso provides a strategy for investment that firms may employ when facing restricted access to capital. This self-financing strategy envisions splitting a project into stages such that the investment required at any time to advance the project is smaller.

Finally, a special thank you to the associate editors who work hard to ensure that the papers published within the BRG Review reflect nothing short of excellence. To our readers, we hope these papers stimulate discussion and discourse and deepen our relationships with fellow professionals, academics, clients, government representatives, attorneys, and other interested individuals across the world.

Regards,
Cleve B. Tyler, PhD
Editor
Proactively Responding to Government Investigations Using Data Analytics: An Examination of Data Considerations in the Post-Acute Context1

Katie Pawlitz, Esq., and Greg Russo*

Katie Pawlitz is a Partner in the Washington, DC, office of Reed Smith LLP. She represents a variety of healthcare providers, suppliers, manufacturers, and associations regarding regulatory issues arising under the Medicare and Medicaid programs and under the healthcare fraud and abuse laws. She also assists clients involved in anti-kickback, Stark Law, and False Claims Act investigations and litigation matters. She can be reached at kpawlitz@reedsmith.com.

Greg Russo is a managing director in the Washington, DC, office of BRG who specializes in providing strategic advice to healthcare organizations through his use of complex data analyses and financial modeling. He can be reached at GRusso@thinkbrg.com.

Picture this: It is 10 a.m. on a Friday morning. Your day as in-house counsel for a nursing home chain is moving along normally. You finished your morning staff meeting and are prepping for a meeting with the Chief Financial Officer (“CFO”). Your administrative assistant knocks on your door to say that a government subpoena directed to your attention has just arrived. Your stomach sinks – not because you are aware of fraud but because you are aware of the headaches that come with responding to a government subpoena. You conduct an initial review of the subpoena and forward a copy for review to another member of the company’s legal team. Your colleague reports to you that he has heard rumors that the Department of Justice (“DOJ”) has been speaking with a former employee who previously worked in your finance department.

Rumor has it that the former employee discussed being a party to several conversations in which both the Chief Executive Officer (“CEO”) and CFO pressured individuals running your different facilities to increase profits by admitting patients even if the circumstances were not warranted. This former employee also indicated that the CEO and CFO exerted pressure for lengths of stay to increase.

1 Originally published in ABA The Health Lawyer 29(5) (June 2017): 23–30.

* Special thanks to Vicki Morris, Brady Fowler, and Elena Kuenzel for their assistance in researching and drafting this article.
You have worked in post-acute care long enough to understand a few things. First, these allegations are serious. Second, these allegations, if true, would increase profits. Third, this is going to be a long investigation. For these reasons, it is not surprising that the DOJ’s interest was piqued, as is yours. You immediately sketch your next steps. Hire external counsel. Ensure a thorough and expeditious investigation. Determine if the allegations are true. If not true, then provide ample evidence to disprove the allegations. If true, then proactively calculate damages and negotiate a settlement with the DOJ.

Any response to an investigation of this sort should involve the use of data analytics. The government and its contractors are becoming increasingly sophisticated in using data to develop theories of wrongdoing and to identify suspected fraudulent behavior. As a result, providers must be aware of their own data and the optics of that data. Providers should seek to use data, and analyses related to the same, to proactively monitor risk; to respond to government investigations; to dissuade the government from intervening in a False Claims Act (“FCA”) case; as a point of consideration in settlement discussions; and, if necessary, as a defense tool in FCA litigation.

While growth in post-acute spending has recently slowed, the Medicare program approximately doubled its post-acute spending between 2000 and 2015. As a result, the government seeks to ensure that the most appropriate (and cost-efficient) care is being provided and has relied on standard data analytics to identify anomalies in care patterns. This article focuses on post-acute providers and data analytics pertaining to these providers.

In making the case for the use of data in proactively responding to government investigations, this article examines data considerations in the context of post-acute providers, although the same concepts apply to all types of providers. This article also describes data monitoring activities undertaken by the government and how similar monitoring can and should be proactively implemented by providers. Finally, this article discusses the use of sampling in FCA investigations and litigation, a common approach for which data can play a significant role.

Reimbursement Overview

In order to appreciate potential risks and allegations in the context of government fraud investigations, and the use of data to respond to the same, one must consider the reimbursement methodology at issue. The following sub-sections provide an overview of Medicare’s reimbursement methodologies for various post-acute provider types and the key data elements for each. These key data elements are items that may be indicative of a provider manipulating the reimbursement system to garner more revenues/profits. As such, government prosecutors are increasingly relying on these key data elements to support theories of wrongdoing. Recognizing this, providers should be proactively monitoring their own metrics as they relate to the relevant data elements discussed below, and
in the face of a government investigation, developing a defense strategy that accounts for, or puts in context, these data elements.

Skilled Nursing Facilities (“SNFs”)

Reimbursement Overview: SNFs are paid a per diem payment for the provision of services to Medicare beneficiaries based on a prospective payment system (“PPS”), which means Medicare pays for services based on a predetermined, fixed amount. The SNF PPS payment covers all costs of furnishing covered Medicare Part A SNF services (routine, ancillary, and capital-related costs), with limited exception. The PPS payment for each resident is adjusted for case mix and geographic variation in wages. Case-mix adjustments are based on residents’ assessments, which classify residents into resource utilization groups (“RUGs”) based on the severity of residents’ medical conditions and skilled care needs. The determination of resource needs, or RUG category, is established using the Minimum Data Set (“MDS”), a standardized tool that assesses the resident’s clinical condition, functional status, and expected use of services.

Key Data Elements: There are several data elements to analyze when responding to DOJ investigations regarding whether a SNF has exploited incentives. These data elements provide a retrospective view of the facility’s operations. Primary among these data elements is the distribution of the number of days that a facility provides SNF services at each RUG level and the number of minutes of therapy being provided. It is helpful to understand how the distribution changes over time and whether the number of minutes of therapy has materially changed. Other patterns that can be assessed for abnormalities include the manner in which change-of-therapy assessments occur and the distribution between group and concurrent therapy.
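The shift-over-time check described above is simple to operationalize. As a minimal sketch (the claim records and field layout here are hypothetical, not an actual MDS or claims extract), the share of paid days at each RUG level can be tabulated by period and compared:

```python
from collections import Counter, defaultdict

def rug_day_shares(claims):
    """Compute each RUG level's share of total paid days, by period.

    `claims` is an iterable of (period, rug_level, paid_days) tuples --
    a hypothetical, simplified stand-in for a claims extract.
    Returns {period: {rug_level: share_of_days}}.
    """
    days = defaultdict(Counter)
    for period, rug, paid_days in claims:
        days[period][rug] += paid_days
    return {
        p: {rug: d / sum(levels.values()) for rug, d in levels.items()}
        for p, levels in days.items()
    }

# Toy data: Ultra High ("RU") days grow from 50% to 80% of the mix --
# the kind of shift an investigator (or the provider itself) would flag.
claims = [
    ("2014", "RU", 500), ("2014", "RV", 500),
    ("2015", "RU", 800), ("2015", "RV", 200),
]
shares = rug_day_shares(claims)
```

Tracking these shares period over period, before any inquiry arrives, is one way to satisfy the proactive-monitoring posture this article recommends.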
Additional patterns could include the percentage of patients readmitted to a short-term acute care hospital, as well as the overall length of stay (“LOS”), especially for patients staying over 90 days. Other measures that will likely be considered when the DOJ investigates are the activities of daily living recorded at each patient’s assessment. The activities of daily living contain measures of how a patient performs daily living tasks (e.g., walking, eating, dressing). These daily living measures are not intended to measure performance or quality of care, and a facility should be cautious either when proactively using them or when responding to a DOJ inquiry.

Inpatient Rehabilitation Facilities (“IRFs”)

Reimbursement Overview: IRFs are freestanding rehabilitation hospitals or units in acute care hospitals that provide intensive rehabilitation services (i.e., at least three hours of intense therapy per day). IRFs are paid an amount of money each time that a
patient leaves the facility (i.e., a per discharge payment), and this payment covers the provision of services to Medicare beneficiaries based on a PPS. The IRF PPS covers all costs of furnishing services (routine, ancillary, and capital-related), with limited exception, such as costs related to operating certain educational activities. Reimbursement for each IRF patient is based on a patient assessment process whereby patients are classified into distinct groups based on clinical characteristics and expected resource needs. Patients are classified using the IRF Patient Assessment Instrument, which contains clinical, demographic, and other information. Separate payments are calculated for each group, including the application of case-mix and facility-level adjustments.

Key Data Elements: When responding to a DOJ investigation related to an IRF, it is useful to understand the distribution of cases among the case-mix groups (“CMGs”) that Medicare uses for reimbursement. Certain CMGs offer larger reimbursement and/or a greater margin, making it imperative to understand the practice pattern at a facility. In a similar vein, one should understand the extent to which Medicare made outlier payments to the facility, for which cases, and for which time periods. Outlier payments are Medicare payments made in addition to the payment calculated in accordance with the established payment methodology. This additional payment covers additional care that the patient received and that Medicare considers to be outside the normal amount of care expected by the payment methodology. A full analysis of readmissions, including readmissions to IRFs or short-term acute care hospitals, is useful to understand the quality of care being provided. Lastly, the overall LOS should be analyzed to understand how it may have shifted.
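The outlier-payment inquiry described above can likewise be reduced to a small, repeatable calculation. A minimal sketch (the payment records are invented for illustration): compute the share of total Medicare payments attributable to outlier payments, by period, and watch for shifts:

```python
from collections import defaultdict

def outlier_payment_share(payments):
    """Share of total Medicare payments attributable to outlier
    payments, by period. `payments` is an iterable of hypothetical
    (period, base_payment, outlier_payment) records."""
    totals = defaultdict(lambda: [0.0, 0.0])  # period -> [outlier, all]
    for period, base, outlier in payments:
        totals[period][0] += outlier
        totals[period][1] += base + outlier
    return {p: o / t for p, (o, t) in totals.items()}

# Toy data: the outlier share jumps from 5% to 20% year over year --
# the kind of change worth explaining before an investigator asks.
payments = [
    ("2014", 9500.0, 500.0),
    ("2015", 8000.0, 2000.0),
]
share = outlier_payment_share(payments)
```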
Long Term Acute Care Hospitals (“LTCHs”)

Reimbursement Overview: LTCHs treat patients with multiple comorbidities requiring long-stay hospital-level care and are certified under Medicare as short-term acute care hospitals. LTCHs are generally defined as having an average inpatient length of stay (“ALOS”) of greater than 25 days and are excluded from the acute care hospital inpatient PPS. Instead, LTCHs are paid by Medicare under the LTCH PPS, based on prospectively set rates. The LTCH PPS classifies patients into distinct diagnostic groups based on clinical characteristics and expected resource needs. Payment for a Medicare patient will be made at a predetermined, per discharge amount pursuant to the patient’s assigned Medicare Severity Long-Term Care Diagnosis-Related Group (“MS-LTC-DRG”), which is based on diagnoses, procedures performed, age, gender, and discharge status. Medicare calculates the ALOS for each MS-LTC-DRG. If a patient’s stay is at least five-sixths of the ALOS calculated by Medicare for that MS-LTC-DRG, then the LTCH will receive the full amount of Medicare’s payment. If the patient stays for less than five-sixths of the ALOS calculated by Medicare for that MS-LTC-DRG, then the LTCH will only receive five-sixths of Medicare’s payment. For example, if a MS-LTC-DRG has an ALOS of 30 days, and a patient stays in the LTCH for 25 days (5/6 of 30), the LTCH will receive the entire Medicare payment. However, if the patient is discharged on day 23, the facility will receive something less than the full payment.

Key Data Elements: It is imperative to measure outlier payments when considering LTCHs, as the outlier thresholds can provide significant insight into a facility’s operations. A case at an LTCH can be considered a short-stay and/or high-cost outlier, both of which have ramifications for the amount of money received by the LTCH. It is also important to analyze the historic readmission percentage, whether to the same LTCH or to a short-term acute care hospital. Specific diagnoses and procedures should also be analyzed, as these are often of interest during investigations because they contribute significant revenue.

Hospice Providers

Reimbursement Overview: Medicare hospice providers are paid a daily payment rate for each day a patient is enrolled in the hospice benefit, which covers all costs incurred in furnishing services identified in a patient’s plan of care (whether provided directly by the hospice provider or arranged by the hospice provider), based on the level of care required to meet the patient’s and family’s needs. The levels of care are (i) routine home care; (ii) continuous home care; (iii) inpatient respite care; and (iv) general inpatient care. Payments are made regardless of the amount of services furnished on any given day. Effective January 1, 2016, a service intensity add-on (“SIA”) payment is available for services furnished at the end of life.

Key Data Elements: Hospice providers must be aware of and analyze the change in acuity of patients over time in conjunction with the change in LOS. During an investigation, it is also important to understand the distribution across each level of hospice care and how this has changed over time.
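Stepping back to the LTCH discussion, the five-sixths short-stay rule described above reduces to a threshold comparison. The sketch below implements only the simplified rule as stated in this article; actual LTCH PPS short-stay outlier payments involve additional comparisons and adjustments:

```python
from fractions import Fraction

def ltch_payment(full_payment, drg_alos_days, los_days):
    """Simplified LTCH payment under the five-sixths rule as described
    above: stays of at least 5/6 of the MS-LTC-DRG's ALOS receive the
    full payment; shorter stays receive a reduced payment (here, 5/6
    of the full amount -- a deliberate simplification)."""
    threshold = Fraction(5, 6) * drg_alos_days
    if los_days >= threshold:
        return float(full_payment)
    return float(Fraction(5, 6) * full_payment)

# The article's example: an ALOS of 30 days, with the full payment
# amount assumed to be $10,000 for illustration.
assert ltch_payment(10_000, 30, 25) == 10_000.0  # 25 = 5/6 of 30 -> full
assert ltch_payment(10_000, 30, 23) < 10_000.0   # day-23 discharge -> reduced
```

The discontinuity at the threshold is exactly why discharge timing around the five-sixths mark is a data pattern investigators examine.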
A key measure for understanding the admission policies of a hospice is the discharge-versus-death ratio over time. Additional measures to be analyzed include the duration and continuity of home care and the distribution of the categories of care (e.g., routine home care, inpatient respite care).

Home Health Agencies (“HHAs”)

Reimbursement Overview: HHAs are paid an episodic payment (for a 60-day episode of care) for the provision of services to patients under a home health plan of care based on a home health (“HH”) PPS. The HH PPS covers all services and supplies (whether provided directly by the HHA or under arrangement), except certain covered osteoporosis drugs and durable medical equipment. HH PPS payments are adjusted for case mix and geographic differences in wages. With respect to the case-mix adjustment, payment rates are based on characteristics of the patient and his or her
corresponding resource needs (e.g., diagnosis, clinical factors, functional factors, and service needs), as reflected in the Outcome and Assessment Information Set (“OASIS”). Based on the OASIS, patients are classified into Home Health Resource Groups. The HH PPS allows for outlier payments to be made for episodes with unusually large costs that exceed a threshold amount. Low-utilization payment adjustments are also available for patients who require four or fewer visits during the 60-day episode. Finally, a partial episode payment adjustment is available when a patient elects to transfer to another HHA or is discharged and readmitted to the same HHA during the 60-day episode.

Key Data Elements: Similar to hospice providers, HHAs need to analyze the acuity of patients over time. Unlike other post-acute care providers, which are paid for a single stay or discharge, Medicare pays HHAs for a length of time (the 60-day episode). As such, one must analyze the number of episodes for each beneficiary within the period analyzed. Additionally, it is helpful to understand how both low- and high-utilization episodes have changed over time.

The Government’s Data Monitoring Activities

In addition to understanding the key data elements at issue, it is also important to understand how these data elements may be monitored or examined by the government. In 2002, Congress passed the Improper Payments Information Act (“IPIA”) to “provide for estimates and reports of improper payments by federal agencies.” This Act covered improper payments by all federal agencies; Congress did not constrain the law to the Medicare program. However, as the Medicare program accounts for a significant portion of the federal budget, this law brought additional scrutiny to the Medicare program.
The law required Medicare, like other federal programs, to estimate the amount of improper payments and report the measures taken to reduce them. Congress amended the IPIA in 2010 via the Improper Payments Elimination and Recovery Act and in 2012 via the Improper Payments Elimination and Recovery Improvement Act, expanding the requirements to include recovering improper payments. The IPIA, as amended, provided for the creation of the Hospital Payment Monitoring Program, which created several standard reports, including:

• Program for Evaluating Payment Patterns Electronic Report (“PEPPER”): supports compliance efforts by publishing payment risks and targets tailored to facility type;

• First-Look Analysis Tool for Hospital Outlier Monitoring (“FATHOM”): supports Quality Improvement Organizations in their identification of outlier facilities that require more investigation; and
• Comparative Billing Reports (“CBRs”): focus on a specific topic/service to determine payment irregularities.

This article focuses on PEPPER reports, which are delivered to operators of many different types of providers, including several post-acute provider types. “PEPPER provides provider-specific Medicare data statistics for discharges/services vulnerable to improper payments. PEPPER can support a hospital or facility’s compliance efforts by identifying where it is an outlier for these risk areas. This data can help identify both potential overpayments as well as potential underpayments.” The following types of providers receive PEPPER reports:

• Short-Term Acute Care Hospitals
• Hospices
• Long-Term Acute Care Hospitals
• Critical Access Hospitals
• Inpatient Psychiatric Facilities
• Partial Hospitalization Programs
• Home Health Agencies
• Inpatient Rehab Facilities
• Skilled Nursing Facilities

While the PEPPER program seeks to assist facilities in identifying “potential overpayments as well as potential underpayments,” the industry must recognize the value of these reports to investigators. The DOJ, the Department of Health and Human Services’ Office of Inspector General (“OIG”), and other investigating agencies can utilize numerous metrics from the PEPPER reports when analyzing the operations of a facility. The PEPPER reports also compare a facility to the nation, the Medicare Administrative Contractor (“MAC”) jurisdiction,2 and the state. The PEPPER reports define a set of metrics for each provider type. For each metric, the PEPPER reports identify what may be indicated if a facility were to be considered an outlier. For example, see the excerpt from the user’s guide for the SNF PEPPER report. The user’s guide3 provides suggested interventions if a facility is at/below the 20th percentile or at/above the 80th percentile.

2 “A Medicare Administrative Contractor (MAC) is a private health care insurer that has been awarded a geographic jurisdiction to process Medicare Part A and Part B (A/B) medical claims or Durable Medical Equipment (DME) claims for Medicare Fee-For-Service (FFS) beneficiaries.” MACs process all Medicare FFS claims for a given geographic area. More information on MACs can be found at: https://www.cms.gov/medicare/medicare-contracting/medicare-administrative-contractors/what-is-a-mac.html.

3 The PEPPER user’s guide is available at https://www.pepperresources.org/.
As suggested by the diagram above, PEPPER defines outliers as those facilities outside the 20th or 80th percentile of all facilities in the United States. With regard to a metric for which a facility is an outlier, PEPPER indicates that a “provider may wish to review medical record documentation to ensure that services beneficiaries receive are appropriate and necessary and that documentation in the medical record supports the level of care and services for which the SNF received Medicare reimbursement.”4 Although PEPPER recognizes that an outlier “does not necessarily indicate the presence of improper payment or that the provider is doing anything wrong,”5 the investigating agency/individual may not appreciate this possibility and may instead interpret the outlier status as support for allegations of improper services or billing. With the analyses and benchmarks available in the PEPPER reports, it is no surprise that investigators are becoming increasingly comfortable relying on these reports as a front-line investigation tool.

Another consideration with respect to information gleaned from PEPPER reports is whether such data implicates the 60-day overpayment rule.
Section 6402(a) of the Patient Protection and Affordable Care Act established a new section of the Social Security Act requiring a person who has received an overpayment to report and return the overpayment to the Secretary, the state, an intermediary, a carrier, or a contractor, as appropriate, by the later of 60 days from when the overpayment is “identified” or the date any corresponding cost report is due, if applicable.6 Any overpayment retained by a person after the deadline for reporting and returning an overpayment is an obligation for purposes of the FCA (a reverse false claim).7 In February 2016, CMS published a final rule related to this requirement, applicable to Medicare Part A and Part B healthcare providers and suppliers.8 Under the final rule, a person has identified an overpayment when the person has or should have, through the exercise of reasonable diligence, determined that the person has received an overpayment and quantified the amount of the overpayment.9 In the final rule, CMS clarified that “reasonable diligence” requires providers and suppliers to undertake ongoing, proactive compliance activities to monitor claims, as well as reactive investigative activities regarding any potential overpayments.10 Depending on the individual circumstances, data analytics could be one of these ongoing compliance efforts requiring further review and analysis.

4 SNF PEPPER User’s Guide, Fifth Edition, p. 7.
5 Id.
6 42 U.S.C. § 1320a-7k(d).
7 Id.
8 81 Fed. Reg. 7654 (Feb. 12, 2016).
9 42 C.F.R. § 401.305(a)(2).
10 81 Fed. Reg. 7661.
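The 20th/80th percentile screen that PEPPER applies is straightforward to reproduce against peer benchmark data as part of a proactive monitoring program. A minimal sketch (facility identifiers and metric values are invented):

```python
import statistics

def flag_outliers(values_by_facility):
    """Flag facilities at/below the 20th percentile or at/above the
    80th percentile of a metric, mirroring the PEPPER-style screen
    described above. Input is {facility_id: metric_value}."""
    cuts = statistics.quantiles(values_by_facility.values(), n=5)
    p20, p80 = cuts[0], cuts[-1]
    return {
        fac: ("low" if v <= p20 else "high" if v >= p80 else None)
        for fac, v in values_by_facility.items()
    }

# Invented metric values (e.g., percent of days billed at the highest
# RUG levels) for ten hypothetical facilities.
metrics = {"A": 12.0, "B": 35.0, "C": 38.0, "D": 41.0, "E": 44.0,
           "F": 47.0, "G": 50.0, "H": 53.0, "I": 56.0, "J": 90.0}
flags = flag_outliers(metrics)
```

As the article notes, a “high” or “low” flag is not itself evidence of wrongdoing; its value is in prompting documentation review before the government draws its own inference.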
Sampling

Another area in which data plays a significant role is in the context of sampling. In FCA investigations, the DOJ or OIG may “draw a sample” or do a “sample review.” The implications of this are significant, and providers should understand what this entails. Before discussing these implications, it is helpful to define the word “sampling.” A provider serves many patients in each time period. These patients are considered the universe. In sampling, an individual develops an approach or sampling plan whereby a certain number of individual patients from the universe are selected and grouped into what is called the sample. A sampling plan can have many different designs and often involves the concept of randomness. The sample is analyzed and conclusions are drawn. Often the DOJ or OIG will want to use the conclusions from their analysis of the sample to draw conclusions about the universe. If this is the case, then an individual will complete a process known as extrapolation, whereby the sample’s conclusion (e.g., overpayments, error rate) is projected, or scaled up, to the universe.

The use of sampling has a long-standing history in the administrative context,11 but was not statutorily authorized until the passage of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (“MMA”).
The MMA established the Medicare Integrity Program, which authorizes Medicare contractors to use extrapolation to determine overpayment amounts when (i) there is a sustained or high level of payment error; or (ii) documented educational intervention has failed to correct the payment error.12 The Medicare Integrity Program also authorizes a Medicare contractor to request records or supporting documentation for a limited sample of submitted claims to ensure that the previous practice is not continuing.13 In the context of FCA investigations, government subpoenas or civil investigative demands (“CIDs”) often include requests for medical records associated with specific patients or claims, based on a sample developed by the government or one of its contractors.14 In the context of FCA lawsuits, recent court decisions have addressed the legality of sampling as it relates to establishing liability and damages. However, the issue is far from settled. The following cases represent recent examples of court decisions involving these issues.

11 See, e.g., 42 C.F.R. § 405.1064 (ALJ decisions involving statistical samples); Section III.B of the OIG’s Provider Self-Disclosure Protocol (requiring a provider’s overpayment calculation to “consist of a review of either: (1) all the claims affected by the disclosed matter or (2) a statistically valid random sample of the claims that can be projected to the population of claims affected by the matter”); HCFA Ruling 86-1 (Hospital Insurance and Supplementary Medical Insurance Benefits (Parts A and B) Use of Statistical Sampling to Project Overpayments to Providers and Suppliers).
12 42 U.S.C. § 1395ddd(f)(3).
13 42 U.S.C. § 1395ddd(f)(4).
14 From a practical standpoint, subpoenas and CIDs operate in a similar fashion: they allow the government to request certain documents.
However, a CID goes further than a subpoena duces tecum and can require the recipient not only to produce documents, but also to answer interrogatories and give oral testimony under oath. 31 U.S.C. § 3733(a). CIDs have become increasingly common now that all U.S. Attorneys can issue them. Prior to 2010, only the Attorney General was authorized to issue a CID, and that authority could not be delegated. However, the Fraud Enforcement and Recovery Act of 2009 authorized the Attorney General to delegate that authority to others within the DOJ.
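Before turning to the case law, the sample-and-extrapolate arithmetic described above can be made concrete with a toy calculation (all figures invented; real sampling plans are often stratified rather than simple random samples):

```python
import random

def extrapolate_overpayment(universe_paid_total, reviewed):
    """Project the reviewed sample's overpayment rate onto the
    universe's total paid dollars. `reviewed` holds (paid, allowed)
    pairs for the sampled claims after medical review."""
    paid = sum(p for p, _ in reviewed)
    overpaid = sum(p - a for p, a in reviewed)
    return universe_paid_total * (overpaid / paid)

# Invented universe: 5,000 claims paid $1,000 each; on review, 10% of
# them would be allowable only at $500.
universe = [(1000.0, 500.0)] * 500 + [(1000.0, 1000.0)] * 4500

sample = random.Random(7).sample(universe, 200)   # simple random sample
universe_paid = sum(p for p, _ in universe)       # $5,000,000
estimate = extrapolate_overpayment(universe_paid, sample)
true_overpayment = sum(p - a for p, a in universe)  # $250,000
```

Whether `estimate` lands near the true $250,000 depends on the sample actually being representative of the universe, which is the point this article stresses: randomness alone does not guarantee representativeness.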
United States ex rel. Martin v. Life Care Centers of America, Inc. (E.D. Tenn.)

The Life Care case was a qui tam action arising from allegations by two former employees against the skilled nursing company; the government intervened in the case before it was settled in October 2016. The government’s central allegation was that Life Care pressured its therapists to target Ultra High RUG levels and longer ALOS periods for patients to maximize its Medicare revenue.15 The government contended that as a result of this pressure, Life Care provided therapy that was not medically reasonable or necessary. The government sought to prove its theory based on evidence from statistical sampling and extrapolation of 400 patient admissions and 1,700 claims, representing 54,396 admissions and approximately 154,621 total claims.

Life Care sought partial summary judgment as to the government’s use of statistical sampling and the use of unidentified claims, arguing that the government could not establish falsity (i.e., liability) by extrapolation. The court denied partial summary judgment, finding that “statistical sampling may be used to prove claims brought under the FCA involving Medicare overpayment, but it does not and cannot control the weight that the fact finder may accord to the extrapolated evidence.”16 In other words, the court decided that determining the weight to afford the extrapolated evidence is best left to a jury. Life Care then filed a motion to certify the summary judgment decision to the Sixth Circuit for interlocutory appeal, which the court denied.17 Life Care and the government settled the FCA lawsuit with no further rulings regarding the sampling issue.

U.S. ex rel. Michaels et al. v. Agape Senior Community Inc. et al. (4th Cir.)

In United States ex rel. Michaels v.
Agape Senior Cmty., Inc., relators (former employees of the Agape nursing home network) initiated a qui tam action claiming damages and other relief under the FCA, the Anti-Kickback Statute, and the Health Care Fraud Statute.18 The government did not intervene in the case. In sum, the relators alleged that Agape submitted false claims to several federal healthcare programs, including Medicare, Medicaid, and TRICARE, seeking reimbursement for nursing home-related services.

15 United States ex rel. Martin v. Life Care Centers of America, Inc., 2014 WL 10937088 (E.D. Tenn. Sept. 29, 2014).
16 Order on Defendant’s Mot. for Partial Summary Judgment, Dkt. No. 184, United States ex rel. Martin v. Life Care Centers of America, Inc., No. 1:08-cv-251 (E.D. Tenn.), dated Sept. 29, 2014.
17 Order on Mot. to Certify the Court’s Order for Immediate Interlocutory Appeal, Dkt. No. 209, United States ex rel. Martin v. Life Care Centers of America, Inc., No. 1:08-cv-251 (E.D. Tenn.), dated Nov. 24, 2014.
18 United States ex rel. Michaels v. Agape Senior Cmty., Inc., 2015 WL 3903675 (D.S.C. June 25, 2015). The federal Anti-Kickback Statute and its implementing regulations make it a criminal offense to knowingly or willfully offer, pay, solicit, or receive any remuneration in exchange for, or to induce, referring an individual to another person or entity for the furnishing, or arranging for or recommending the purchase, of any item or service that may be paid for in whole or in part by a federal healthcare program, including Medicare and Medicaid. 42 U.S.C. § 1320a-7b(b). The Health Care Fraud Statute makes it a criminal offense to knowingly and willfully execute a scheme to defraud a healthcare benefit program. 18 U.S.C. § 1347.
The district court rejected the relators’ use of statistical sampling in proving liability and damages, specifically finding that the Agape relators would be required to “prove each and every claim based upon the evidence relating to that particular claim.” The court also noted that statistical sampling would be appropriate when it is the only way for a qui tam relator to prove damages, for example, when evidence has been destroyed or dissipated. The court certified the issue of whether statistical sampling can be used to demonstrate FCA liability without directly analyzing Medicare billing claims, among others, for interlocutory appeal to the Fourth Circuit Court of Appeals.19

In a February 14, 2017 decision, the Fourth Circuit found that certification of the statistical sampling ruling for interlocutory review was not appropriate, since the question focused on whether the particular methods of statistical sampling used in the Agape matter were reliable, and not the pure legal question of whether sampling is a legally valid technique for determining damages in FCA actions.20 As such, the issue of whether sampling is an acceptable method to calculate FCA claims or violates due process remains outstanding, in the Fourth Circuit and elsewhere.

United States ex rel. Paradies v. AseraCare, Inc. (N.D. Ala.)

United States v. AseraCare Inc. arose out of allegations brought by three relators, in coordination with the government, contending that hospice care provider AseraCare submitted Medicare claims for patients who did not meet the criteria for hospice care.21 In this case, the government sought to establish FCA liability using statistical extrapolation, seeking more than $200 million in damages based on a sample of approximately 120 patients.
The court initially denied AseraCare’s motion for summary judgment, concluding that “statistical evidence is evidence” of falsity sufficient to defeat summary judgment.22 The trial was then bifurcated into falsity and scienter phases.23 Following the first phase (falsity), the judge granted a new trial based on an error in instructing the jury: failing to provide complete instructions as to what was legally necessary for the jury to find that the claims before it were false.24

19 United States ex rel. Michaels v. Agape Senior Cmty., Inc., No. 15-238 (L) (0:12-cv-03466-JFA) (4th Cir. Sept. 29, 2015). The district court also certified for interlocutory appeal the issue of whether the DOJ has absolute veto power over FCA settlements in cases where it has not intervened. The DOJ blocked settlement between the relators and Agape, claiming that the proposed settlement amount was too low and the proposed release of legal liability too broad.
20 United States ex rel. Michaels v. Agape Senior Community, et al., 2017 WL 588356 (4th Cir. 2017). With respect to the issue of veto power, the Fourth Circuit held that the government has absolute veto power over voluntary settlements in FCA matters even when it declines to intervene in the case.
21 United States v. AseraCare Inc., 2015 WL 8486874 (N.D. Ala. Nov. 3, 2015).
22 United States v. AseraCare Inc., 2014 WL 6879254 (N.D. Ala. Dec. 4, 2014) (emphasis in original).
23 Order Granting Motion to Bifurcate, United States v. AseraCare Inc., No. 2:12-CV-245-KOB (N.D. Ala. May 20, 2015).
24 United States v. AseraCare Inc., 2015 WL 8486874 (N.D. Ala. Nov. 3, 2015).

In March 2016, the court granted summary judgment to AseraCare based on the government’s failure to prove falsity, explaining
that mere differences in clinical judgment are not enough to establish FCA falsity, and the government had not produced evidence other than conflicting medical expert opinions.25 The government has appealed to the Eleventh Circuit Court of Appeals.

As these cases show, sampling can have a significant impact on an investigation and/or litigation. A provider, its external counsel, and expert consultants should be involved in all aspects of the sampling to ensure that a fair and reasonable sample is drawn and that any extrapolations are appropriate. In the sampling approach, aspects to consider include the methodology used to create the sample (e.g., stratification), the representativeness of the sample, the confidence (degree of certainty) levels, and the precision (range of accuracy) levels. These aspects will materially affect the size and composition of the sample. After analyzing the sample, it is important to consider any comparisons that are drawn between the sample and any benchmarks. One must consider the qualitative and quantitative differences among the facility, the sample, and any benchmarks offered.

In challenging the sample, a provider may want to consider conducting an evaluation of the sampling plan, conducting an independent review of the sample claims, conducting a review of non-sample claims (i.e., the universe), and/or challenging the credentials of the reviewers analyzing the sample. Should the sample be used in litigation, one may consider Daubert motions, as the sampling evidence may be unqualified.26 Regardless of whether sampling is used in an investigation or litigation, sampling requires a careful review to ensure that it is being used appropriately. One of the key pitfalls in sampling is the concept of randomness, as many people equate randomness with representativeness.
It is important to remember that pulling a sample randomly does not necessarily mean that the sample will be representative of the universe. Further analysis is needed to ensure that representativeness has been satisfied.

Conclusion

The use of data analytics in the context of healthcare reimbursement and fraud prevention is not a new concept. Government contractors have been analyzing data for payment and recovery purposes for the past several years. In the fraud and abuse context, the government and its contractors have also increasingly relied upon available data to identify potential issues for further investigation of wrongdoing by providers. Relators and their counsel have also increasingly mined publicly available claims data in bringing FCA qui tam actions.

25 United States v. AseraCare, Inc., 176 F. Supp. 3d 1282 (N.D. Ala. Mar. 31, 2016).
26 A Daubert motion, named after a Supreme Court case, Daubert v. Merrell Dow Pharms., 509 U.S. 579 (1993), is a specific type of motion in limine used to exclude the presentation of unqualified evidence to the jury.
What has changed is the government’s increasing reliance on data to develop theories of wrongdoing by providers. As a result, it is imperative that providers be well aware of their own data, and the optics of such data, particularly as it compares to the data of other, similar providers, which is available through public sources. Providers should be proactively monitoring their own data as it relates to the relevant data elements discussed above. Proactive monitoring requires not only an awareness of the actual data metrics, but also an understanding of and appreciation for the factors that contribute to, or influence, the metrics. Knowing this information will allow a provider to respond quickly and intelligently to a government investigation, if necessary. Further, in the context of government investigations, data analytics can be used by providers to contradict, or put into more accurate context, government allegations of wrongdoing; to resolve an investigation; to assist in settlement negotiations; or to dissuade the government from intervening in a qui tam case.
Antitrust Market Definition—A Review of Five Influential Papers

Audrey Boles, Sam Brott, and Michele Martin*

Introduction

In 1982, economist and Nobel Laureate George Stigler issued the following call to action:

My lament is that this battle on market definitions, which is fought thousands of times what with all the private antitrust suits, has received virtually no attention from us economists. Except for a casual flirtation with cross-elasticities of demand and supply, the determination of markets has remained an undeveloped area of economic research at either the theoretical or empirical level.1

Since that time (and, in fact, a few years prior), economists have crafted both empirical and theoretical methods that could be used to define antitrust markets to aid inquiry into competitive effects. What follows is a collection of summaries of five papers that have influenced thinking about market definition over the last thirty years.2 We focus on five papers that embody interesting points in the history and evolution of thinking about market definition in the United States:3

• Kenneth G. Elzinga and Thomas F. Hogarty, “The Problem of Geographic Market Delineation in Antimerger Suits” (1973)

• David T. Scheffman and Pablo T. Spiller, “Geographic Market Definition under the US Department of Justice Merger Guidelines” (1987)

* Audrey Boles is an engagement manager with Applied Predictive Technologies, a business analytics software company. Previously, she was a consultant with BRG, where she specialized in data analysis for antitrust, intellectual property, and healthcare litigation matters. She can be reached at aboles@predictiveTechnologies.com. Sam Brott is a consultant at BRG, where he specializes in economic data analysis for antitrust and intellectual property litigation. He can be reached at sbrott@thinkbrg.com. Michele Martin is a public policy data scientist at Humana.
Her role involves using data analysis and research to support public policy advocacy. Previously, she was a consultant at BRG, where she applied her analytical expertise to litigation and internal investigations for a wide range of healthcare clients, including health insurers and pharmaceutical manufacturers. She can be reached at michmartin809@gmail.com.

1 George J. Stigler, “The Economists and the Problem of Monopoly,” 72 Am. Econ. Rev. 1 (1982): 9.
2 Noteworthy contributions that are not reviewed here include papers such as: George Hay, John C. Hilke, and Philip B. Nelson, “Geographic Market Definition in an International Context,” 64 Chicago-Kent L. Rev. 711 (1988); Steven Salop and Serge Moresi, “Updating the Merger Guidelines: Comments” (November 9, 2009), available at: https://www.ftc.gov/sites/default/files/documents/public_comments/horizontal-merger-guidelines-review-project-545095-00032/545095-00032.pdf; Michael Salinger, “The Concentration-Margins Relationship Reconsidered,” Brookings Papers on Econ. Activity: Microeconomics 287 (1990); Gregory J. Werden, “Demand Elasticities in Antitrust Analysis,” 66 Antitrust L.J. 363 (1998): 384–396; and Gregory J. Werden and Luke M. Froeb, “Correlation, Causality, and All That Jazz: Inherent Shortcomings of Price Tests for Antitrust Market Delineation,” 8 Rev. Indus. Org. 329 (1993).
3 A more comprehensive overview can be found in Greg J. Werden, “The History of Antitrust Market Delineation,” 76 Marq. L. Rev. 123 (1992).
• Barry C. Harris and Joseph J. Simons, “Focusing Market Definition: How Much Substitution is Necessary?” (1989)
• Joseph Farrell and Carl Shapiro, “Antitrust Evaluation of Horizontal Mergers: An Economic Alternative to Market Definition” (2010)
• Louis Kaplow, “Market Definition: Impossible and Counterproductive” (2013)

We also summarize critiques of the papers to properly frame the limits of the proposed methods. In many cases, these limitations were first articulated by the originating author(s), followed by calls for consideration before application of the various methods.

We begin with Elzinga and Hogarty’s “The Problem of Geographic Market Delineation in Antimerger Suits.”4 This paper represents a position taken relatively early that “the definition of a market offered by classical economists can and should be used in the antitrust context.”5 Their analysis of trade flows into and out of specified regions was initially employed by merging companies as a means to define geographic markets. Following several losses in court, the Federal Trade Commission (FTC) downplayed the applicability of the Elzinga-Hogarty test as a viable method to delineate antitrust markets in hospital mergers, a position that has been accepted in more recent decisions regarding hospital mergers.6 Nevertheless, Elzinga and Hogarty’s emphasis on tests “being generally consistent with economic analysis” and “reasonably applicable by antitrust practitioners” set a standard for later market definition analytical developments.7

One development was the publication of the 1982 Merger Guidelines.8 The 1982 Merger Guidelines (revised in 1984) represent the first adoption of the hypothetical monopolist test (HMT) by US enforcement agencies.
Now widely used, the HMT was initially criticized as being “completely nonoperational” because “no method of investigation of data is presented, and no data, even those produced by coercive process, are specified that will allow the market to be determined empirically.”9 The “nonoperational” aspect of the HMT was bridged, in part, by the next papers

4 Kenneth G. Elzinga and Thomas F. Hogarty, “The Problem of Geographic Market Delineation in Antimerger Suits,” 18 Antitrust Bull. 45 (1973): 81.
5 Werden (1992).
6 American Bar Association, Health Care Mergers and Acquisitions Handbook, Second Edition (2018): 54–55.
7 Elzinga and Hogarty (1973): 81.
8 A total of six merger guidelines have been promulgated by US antitrust authorities, including revisions: US Department of Justice (DOJ), Merger Guidelines (1968), available at http://www.justice.gov/atr/hmerger/11247.pdf (“1968 Merger Guidelines”); US DOJ, Merger Guidelines (1982), available at http://www.justice.gov/atr/hmerger/11248.pdf (“1982 Merger Guidelines”); US DOJ, Merger Guidelines (1984), available at http://www.justice.gov/atr/hmerger/11249.pdf (“1984 Merger Guidelines”). The 1984 Merger Guidelines were superseded by the Horizontal Merger Guidelines, which were jointly issued by the DOJ and the FTC. US DOJ and FTC, Horizontal Merger Guidelines (1992), available at https://www.justice.gov/sites/default/files/atr/legacy/2007/07/11/11250.pdf (“1992 Horizontal Merger Guidelines”); US DOJ and FTC, Horizontal Merger Guidelines (1997), available at https://www.justice.gov/sites/default/files/atr/legacy/2007/08/14/hmg.pdf (“1997 Horizontal Merger Guidelines”); US DOJ and FTC, Horizontal Merger Guidelines (2010) [hereinafter Horizontal Merger Guidelines (2010)], available at http://ftc.gov/os/2010/08/100819hmg.pdf (“2010 Horizontal Merger Guidelines”).
9 George J. Stigler and Robert A. Sherwin, “The Extent of the Market,” 28 J.L. & Econ. (1985): 555, 582.
included in our summary. Both Scheffman and Spiller (1987)10 and Harris and Simons (1989)11 pick up where the 1984 Merger Guidelines left off. They introduce empirical methods for defining relevant antitrust markets that are consistent with the 1984 Merger Guidelines: residual demand and critical loss, respectively. The fact that these two methods continue to be used decades after their introduction is indicative both of their usefulness and of the continued prevalence of the HMT as the framework for defining markets.

But science is never settled, and intrinsic to its method is a process of continued scrutiny and evaluation. One of the latest tools to be introduced (and included as part of the 2010 Horizontal Merger Guidelines) is the concept of upward pricing pressure (UPP), as initially developed by Farrell and Shapiro (2010).12 UPP represents a notable departure from defining markets in order to infer market power from market shares. Instead, its emphasis is on evaluating whether lost sales of a newly merged firm’s product due to a price increase can be internalized through increased sales of a separate product, such that prices may be profitably increased after accounting for post-merger efficiencies. Unlike its predecessor methods, UPP does not hinge on a properly defined market from which market shares can be calculated, though in practice market shares often play a role in UPP analyses.

While many papers have focused on empirical applications related to the HMT, Kaplow (2013) criticizes the current “market redefinition” regime embodied in the HMT.13 Kaplow argues (in this and his other papers and speeches) for eliminating the use of the HMT, and instead emphasizes what he considers more pragmatic (and ad hoc) methods of market definition, much like the methods that were prevalent prior to the 1984 Merger Guidelines.
In that sense, Kaplow seeks to return emphasis to the economic intuition behind market definition, rather than to the application of a framework that, according to him, serves little practical purpose. Kaplow’s criticisms echo many of those found in the earlier literature. Indeed, Stigler and Sherwin’s criticism could just as easily have come from Kaplow:

Why the factual inquiry necessary under [the HMT] approach – coupled with quantification of market shares and judgment concerning the level and changes in concentration – is any easier than asking directly whether the merger will result in an increased price (the question that is, after all, the one to be answered) is beyond us.14

10 David T. Scheffman and Pablo T. Spiller, “Geographic Market Definition under the US Department of Justice Merger Guidelines,” 30 The Journal of Law and Economics 1 (1987).
11 Barry C. Harris and Joseph J. Simons, “Focusing Market Definition: How Much Substitution is Necessary?” Research in Law and Economics 207 (1989).
12 Joseph Farrell and Carl Shapiro, “Antitrust Evaluation of Horizontal Mergers: An Economic Alternative to Market Definition,” 10 The B.E. Journal of Theoretical Economics 1, Article 9 (2010): 2, 34.
13 Louis Kaplow, “Market Definition: Impossible and Counterproductive,” 79 Antitrust Law Journal 1 (2013): 361–379.
14 Stigler and Sherwin (1985).
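The UPP idea reduces to simple arithmetic, which is part of its appeal relative to the HMT. The sketch below is our own stylized illustration of the Farrell-Shapiro style test; the prices, costs, diversion ratio, efficiency credit, and function names are all invented for illustration and do not come from the paper.

```python
def upp(d12, p2, c2, e1, c1):
    """Net upward pricing pressure on product 1 after a merger with the
    seller of product 2 (stylized Farrell-Shapiro style test).

    d12    -- diversion ratio: share of product 1's lost sales recaptured by product 2
    p2, c2 -- price and marginal cost of product 2
    e1     -- assumed fractional efficiency (marginal-cost saving) on product 1
    c1     -- pre-merger marginal cost of product 1
    A positive value signals net upward pricing pressure.
    """
    return d12 * (p2 - c2) - e1 * c1


def guppi(d12, p2, c2, p1):
    """Gross upward pricing pressure index for product 1 (no efficiency credit),
    expressed relative to product 1's price."""
    return d12 * (p2 - c2) / p1
```

With a 25 percent diversion ratio, a $4 margin on product 2, and a 10 percent efficiency credit on a $7 marginal cost, `upp(0.25, 10, 6, 0.10, 7)` returns roughly 0.3, signaling upward pressure. Note what the text emphasizes: no market had to be defined and no market share computed to run the test.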
The importance of and specific methods used in the definition of markets in antitrust matters will continue to evolve, especially as more detailed information becomes available to economists. However, even newly developed methods likely will continue to embody thinking developed by scholars over the last thirty years.

The Elzinga-Hogarty Test

Elzinga and Hogarty (1973) introduce a test of geographic market definition now referred to as the Elzinga-Hogarty test.15 The proposed method is based on economic arguments put forward during US v. Pabst and US v. Philadelphia National Bank.16 The Elzinga-Hogarty test is limited to geographic market definition (rather than product market definition), which, according to the authors, had received limited attention from academic researchers at the time the article was written.

Elzinga and Hogarty observe that, prior to their research, many antitrust experts relied on comparisons of prices in defining an antitrust market. However, as Elzinga and Hogarty point out, defining a market based on price comparisons is unreliable for two main reasons. First, assigning an accurate, all-encompassing economic price to any good or service is difficult. Second, supply and demand factors, rather than competition between suppliers in two different geographic areas, may be the key determinants of price. That is, different geographic areas could have similar prices simply because the two areas have similar demand and supply characteristics, which does not necessarily mean that the two areas are in the same geographic market.

Elzinga and Hogarty focus on the geographic market definition issues decided by the US Supreme Court in US v. Pabst, addressing the Pabst and Blatz merger of 1958.
Elzinga and Hogarty surmise that the geographic market accepted by the Court was incorrect because the government examined the supply of beer moving into the hypothetical market, but did not look at the supply of beer moving out of the market. The Department of Justice (DOJ) argued that the market for Pabst should be defined as the state of Wisconsin because 80 percent of the beer consumed in Wisconsin was also brewed in Wisconsin. However, the authors claim that the market could more appropriately have been defined as an area encompassing five states, because it is also appropriate to consider exports out of an alleged market. While 80 percent of the beer consumed in Wisconsin was brewed in Wisconsin, less than 25 percent of the beer brewed in Wisconsin was consumed in Wisconsin.

Elzinga and Hogarty also examine the suit regarding the proposed merger of Philadelphia National Bank and Girard Trust Corn Exchange Bank in the industry for commercial banking services, which was ultimately blocked by the Supreme Court in 1963. Here, too, the authors find fault with the DOJ’s proposed delineation of the geographic market, which was ultimately accepted by the Supreme Court, viewing it as too narrow. In this case, however, as opposed to the beer example, the government overlooked the flow of business into the hypothetical market and only focused on

15 Elzinga and Hogarty (1973): 45–81.
16 United States v. Philadelphia National Bank, 201 F. Supp. 348 (1962), 374 US 321 (1963). United States v. Pabst Brewing Company, 233 F. Supp. 475 (1964); 384 US 546 (1966); 296 F. Supp. 994 (1969).
the flow of business out of the area. The Elzinga-Hogarty test was novel in its examination of supply flowing both into and out of a hypothetical market.

The Elzinga-Hogarty test has two parts in defining a geographic market: the Little In From Outside (LIFO) and Little Out From Inside (LOFI) tests. The first step of the test is to create a starting point by taking the largest location of the largest of the merging firms, and then finding “the minimum area required to account for at least 75 percent of the appropriate ‘line of commerce’ shipments of that firm (or plant).”17 This area is now the hypothetical market area. If the merging parties are in different geographical areas, then this step must be followed for each area. The next step is to perform the LIFO test. The LIFO test requires that the hypothetical market area be expanded until 75 percent of the total sales of the relevant product within the current hypothetical market area are shipped from plants within that area. The authors note that if, after continuing to expand the area, the test is never satisfied, then the hypothetical market area “is (at least) national in scope.”18 Once the LIFO test is satisfied, the final part of the exercise is the LOFI test. To satisfy the LOFI test, the hypothetical market area must, if necessary, be expanded until 75 percent of the shipments of the relevant product by firms within the area are to customers within the area. After both tests have been satisfied, the market volume can be calculated by summing all consumption from shipping points within the newly established hypothetical market area.19

Although the authors advocate for 75 percent as the threshold in their procedure, they acknowledge that this value is arbitrary and also that a higher value such as 90 percent may be more appropriate.
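The LIFO and LOFI checks themselves are mechanical. The sketch below is a minimal illustration of the two ratios, assuming a hypothetical shipments table (`shipments[origin][destination] = units`) with invented regions and volumes; it tests a given candidate area but does not automate the expansion step described above.

```python
# Invented shipment volumes between three hypothetical regions.
shipments = {
    "WI": {"WI": 800, "IL": 150, "MN": 50},
    "IL": {"WI": 120, "IL": 900, "MN": 30},
    "MN": {"WI": 40, "IL": 60, "MN": 500},
}


def lifo_lofi(area, shipments):
    """Return (LIFO, LOFI) ratios for a candidate area (a set of regions).

    LIFO: share of the area's consumption supplied from within the area.
    LOFI: share of the area's production consumed within the area.
    """
    regions = shipments.keys()
    consumed = sum(shipments[o][d] for o in regions for d in area)  # all inflows + local
    produced = sum(shipments[o][d] for o in area for d in regions)  # all outflows + local
    local = sum(shipments[o][d] for o in area for d in area)        # made and used inside
    return local / consumed, local / produced


def passes_test(area, shipments, threshold=0.75):
    """Does the candidate area satisfy both parts at the chosen threshold?"""
    lifo, lofi = lifo_lofi(area, shipments)
    return lifo >= threshold and lofi >= threshold
```

With these invented figures, the single-region candidate `{"WI"}` passes at the 75 percent threshold (LIFO of about 0.83, LOFI of 0.80) but fails at the stricter 90 percent threshold the authors later favored, so the area would have to be expanded.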
This discussion is directly revisited by Elzinga and Hogarty in their 1978 follow-up paper, “The Problem of Geographic Market Delineation Revisited: The Case of Coal.”20 In the follow-up, the authors advocate for a 90 percent threshold because it often results in overlap between markets, which is more characteristic of the real world. They found that when using 75 percent, as originally proposed, too many gaps were created between markets that could not be accounted for.

Elzinga and Hogarty’s 1978 paper was published in response to two major critiques of the original paper, by Giffen and Kushner (1976) and Shrieves (1975).21 These critiques claim to have found flaws in the procedure, both citing data from the coal industry. Elzinga and Hogarty dismiss the criticisms, pointing out where each critic failed to properly apply the LIFO and LOFI tests in their respective analyses.

17 Elzinga and Hogarty (1973): 73.
18 Ibid., 74.
19 The authors are not explicit on the next step if the LOFI test cannot be satisfied.
20 Kenneth G. Elzinga and Thomas F. Hogarty, “The Problem of Geographic Market Delineation Revisited: The Case of Coal,” Antitrust Bulletin 23 (1978): 1–18.
21 Phillip E. Giffen and Joseph W. Kushner, “Geographic Submarkets in Bituminous Coal: Defining a Southeastern Submarket,” Antitrust Bulletin 21 (1976): 67–79. Ronald E. Shrieves, “Geographic Market Areas and Market Structure in the Bituminous Coal Industry,” Appalachian Resource Project, University of Tennessee, ARP 45 (1975).
Giffen and Kushner (1976) claim that the test was unreliable because their application of the test to the coal industry did not yield the same southeastern market that they believed existed. However, Giffen and Kushner did not account for the LOFI portion of the test, ignoring the flow of supply out of the hypothetical market. Shrieves (1975) modifies the Elzinga-Hogarty test to include an analysis of pricing data and argues that Wisconsin should be treated as its own market in the coal industry due to its self-sufficiency. Elzinga and Hogarty (1978) dismiss Shrieves’ critique for lacking not only a LOFI application, but also a LIFO application. Additionally, Elzinga and Hogarty (1978) dismiss Shrieves’ approach for its creation of hypothetical markets based, at least partially, on pricing data.

Gregory J. Werden’s (1981) critique of the Elzinga-Hogarty test consists of two major points.22 The critique, essentially an application of Hotelling’s Law, is characterized by Elzinga as setting forth “a hypothetical example where there is one product and two firms spatially dispersed at A and B with customers spread uniformly along a line joining A and B.”23 Werden’s idea is that, under the Elzinga-Hogarty test, if there were positive transportation costs, then there would be a point C that would divide sales into two territories, such that two distinct markets would meet, but not overlap, at C. This scenario would allow either party, A or B, to expand or shrink its market at any time with a slight change in price. Werden further argues that, under the same scenario, a measurement of the cross-elasticity of demand would be a better test of where to delineate market boundaries, with a high cross-elasticity of demand indicating that an area is a single market.
The same publication included a response to Werden by Elzinga.24 Elzinga provides a simple situation in which a high cross-price elasticity would suggest competition when there is a lack of competition: if A is competitive while B is monopolized, we could see a rise in the cross-price elasticity, as some customers on the fringe would avoid paying high monopoly prices by purchasing from A. Although two markets would remain, with one being monopolized, Werden’s method would suggest that the two are competing.

In 2004, the DOJ and FTC jointly issued a report that critiqued the Elzinga-Hogarty test.25 The report addresses several hospital merger cases in the 1990s in which the courts’ acceptance of the Elzinga-Hogarty test played a role in US government losses. The critique’s main point is that, because the test was designed for fungible commodities, it cannot be applied to service industries such as the hospital industry. Further, the report argues that flows of patients are not appropriate metrics of shipments, as envisioned in the Elzinga-Hogarty test, because some patients travel long distances to obtain care for unique conditions that cannot be treated within their own localities. As such, the use of the Elzinga-Hogarty test without adjustment would count as an export (or

22 Gregory J. Werden, “The Use and Misuse of Shipments Data in Defining Geographic Markets,” Antitrust Bulletin 26 (1981): 719–737.
23 Kenneth G. Elzinga, “Defining Geographic Market Boundaries,” Antitrust Bulletin 26 (1981): 742.
24 Ibid., 739–752.
25 FTC and US DOJ, Improving Healthcare: A Dose of Competition, report (2004).
import) a patient who travels to another hospital, not because the patient seeks a more competitive price, but because the patient seeks a service for which the hospitals do not compete to begin with. Although the report critiques the use of the test in hospital cases, it does not dismiss the test entirely—it cautions that the test cannot simply be followed in all industries. As Davis and Garcés summarize, “Elzinga and Hogarty’s test can provide a useful piece of evidence when coming to a view on the appropriate market definition… [But] it may seriously mislead those who apply the test formulaically.”26

Elzinga recently penned another paper on the topic, with coauthor Anthony Swisher (2011).27 Elzinga and Swisher critique Elzinga’s own method, writing that “two characteristics of hospital services markets… may tend to undermine the utility of the Elzinga-Hogarty test in hospital merger cases.”28 The two characteristics that the authors discuss are the “Silent Majority Fallacy” and the “Payer Problem.” The Silent Majority Fallacy refers to the large number of patients who do not travel as far as projected in response to a price increase because they strongly prefer to receive treatment close to home. Similarly, the Payer Problem refers to the large number of patients who would be projected to have strong aversions to price increases, but who would not directly feel the impacts of the increases due to the role of insurance plans. Both characteristics can lead to an overestimation of the size of the geographic market. However, the authors also write that “[i]t remains to be seen… whether the Elzinga-Hogarty test will continue to be relied on in more traditional, pre-closing merger challenges.”29

Overall, the Elzinga-Hogarty test has been useful in attempting to design a framework by which to think about a geographical market delineation.

26 Peter Davis and Eliana Garcés, Quantitative Techniques for Competition and Antitrust Analysis, Princeton, NJ: Princeton UP (2010).
27 Kenneth Elzinga and Anthony Swisher, “Limits of the Elzinga-Hogarty Test in Hospital Mergers: The Evanston Case,” International Journal of the Economics of Business 18 (2011): 133–146.
28 Ibid., 133.
29 Ibid., 133.
As noted by Werden, Elzinga and Hogarty “were the first economists to argue that the definition of a market offered by classical economists can and should be used in the antitrust context. They also were the first to propose and apply a specific method for using data to delineate markets.”30 The test continues to be used in certain circumstances; however, limitations have been recognized. As pointed out by the DOJ and FTC, two appellate courts, and Elzinga himself, the test is not well suited for market definition in hospital cases.31 Also, the Elzinga-Hogarty test is best used with goods rather than with services. Additionally, its use is limited by data availability regarding shipments or consumption in any given area.

30 Werden (1992): 185.
31 American Bar Association, Health Care Mergers and Acquisitions Handbook, Second Edition (2018): 54–55.
Residual Demand Analysis

Scheffman and Spiller’s 1987 paper “Geographic Market Definition under the US Department of Justice Merger Guidelines” serves as an empirical guide to defining relevant antitrust markets.32 The publication of the Merger Guidelines, and Scheffman and Spiller’s subsequent application of them, marked a key evolution in how geographic markets are defined in mergers reviewed by the DOJ and FTC. The 1984 Merger Guidelines define an antitrust market as “a product or group of products and a geographic area in which it is sold such that a hypothetical, profit-maximizing firm, not subject to price regulation, that was the only present and future seller of those products in that area would impose a ‘small but significant and nontransitory’ increase in price [SSNIP] above prevailing or likely future levels.”33 Scheffman and Spiller explain that to define such a geographic market, one must start with a particular geographic area in which the product(s) at hand are sold. From here, one would sequentially expand that area until the above conditions of an antitrust market are met, such that all geographic areas with suppliers providing viable substitutes are included. The purpose of the Merger Guidelines, Scheffman and Spiller explain, is to go beyond identifying an economic market as that area where prices of goods are correlated, to identifying the specific group of producers and geographic area in which a horizontal merger has the potential to facilitate the creation or enhancement of market power.

The paper begins by distinguishing antitrust markets from economic ones, positing that existing empirical tests for delineating geographic markets were flawed because they did not recognize inherent differences between the two.

32 Scheffman and Spiller (1987): 123–147.
33 US DOJ, 1984 Merger Guidelines: 3, available at http://www.justice.gov/atr/hmerger/11249.pdf.
Scheffman and Spiller provide the classical definition of an economic market as an “area and set of products within which prices are linked to one another by supply- or demand-side arbitrage and in which those prices can be treated independently of prices of goods not in the market.”34 In other words, taking into account transportation costs, prices of products in the same economic market are directly linked by arbitrage. In a hypothetical economic product market with three producers, a price increase by Producer A would result in higher sales for Producers B and C. In turn, this would reduce the sales by Producer A. Thus, the existence of Producers B and C weakens potential market power of Producer A. In determining an antitrust market, one must go a step further and examine the supply responsiveness of producers both within and outside the economic market. Potential entrants with a small supply elasticity might be left out of an antitrust market, regardless of whether they are considered to be in the same economic market, because they would not respond to increased demand in a substantive way. On the other hand, producers that are not considered to be in the economic market, but who represent a next-best substitute and have the capacity to respond to increased demand, may be 32 Scheffman and Spiller (1987): 123–47 33 US DOJ, 1984 Merger Guidelines: 3, available at http://www.justice.gov/atr/hmerger/11249.pdf. 34 Scheffman and Spiller (1987): 125.
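The sequential-expansion logic of the hypothetical monopolist test can be given a bare-bones operational form. The sketch below is our own simplification, not Scheffman and Spiller's method: it characterizes each nested candidate market by an assumed contribution margin and an assumed aggregate demand elasticity, approximates the sales lost from a SSNIP linearly, and widens the candidate until the price increase would be profitable.

```python
def ssnip_profitable(margin, elasticity, bump=0.05):
    """Would a hypothetical monopolist profit from a price increase of `bump`?

    margin     -- contribution margin as a fraction of price
    elasticity -- assumed aggregate (negative) demand elasticity facing
                  the candidate market
    Linear approximation: fraction of unit sales lost = -elasticity * bump.
    Profitable when extra margin on retained sales exceeds margin forgone
    on lost sales.
    """
    lost = -elasticity * bump
    return bump * (1 - lost) > margin * lost


def delineate(candidates, bump=0.05):
    """Expand nested candidate markets (smallest first) until the SSNIP is
    profitable; each candidate is a (name, margin, elasticity) tuple with
    assumed values."""
    for name, margin, elasticity in candidates:
        if ssnip_profitable(margin, elasticity, bump):
            return name
    return None  # the market is (at least) national in scope
```

As the candidate widens, fewer substitutes remain outside it and demand facing the candidate becomes less elastic, so, for example, `delineate([("city", 0.4, -3.0), ("state", 0.4, -2.0)])` rejects the city and stops at the state.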
considered as part of the relevant antitrust market. This difference between economic and antitrust markets is one point that Scheffman and Spiller extract from the Merger Guidelines.

The authors discuss two empirical tests that were, at the time the paper was written, widely used for delineating geographic markets: price tests and shipment tests. Price tests examine the relationship between prices in different areas over time. For two locations to be considered part of the same geographic market, “prices in the two areas should move together with the difference in prices in the two areas approximating marginal transportation costs.”35 Shipment tests, such as the Elzinga-Hogarty test, rely on data on shipments into, out of, and between different geographic areas to inform what constitutes a relevant geographic market. The authors’ main purpose in explaining these tests is to highlight where they fall short. While both types of tests successfully identify economic markets based on a classical definition, neither provides information about the supply elasticity of different groups of producers, information that Scheffman and Spiller deem crucial to the delineation of antitrust markets given the Merger Guidelines.

To illustrate their point: if two regions have few shipments of a particular product between them or do not exhibit prices that are correlated, the above two tests would not consider them to be in the same antitrust market. However, if one region has a highly inelastic supply, producers in the other could theoretically raise prices above a competitive level. Therefore, an empirical test that takes price elasticities into account is needed to define relevant antitrust markets in accordance with rules laid out in the Merger Guidelines. The authors recommend that residual demand analysis should be used to identify whether a candidate market is, in fact, an antitrust market.
A firm’s residual demand is a function of overall market demand and the quantity supplied by other firms at various price points. More specifically, the residual demand curve is the individual firm’s demand, which is that portion of market demand not supplied by other firms. Figure 1 illustrates the residual demand curve for Firm A in relation to overall market demand. The graph on the right of the figure shows overall market demand and the quantity supplied by all firms except Firm A. The demand curve on the left graph is the market demand curve shifted inward by the exact quantity produced by other (non-Firm A) firms at each price. 35 Ibid., 129.
Figure 1: Residual Demand Curve

Scheffman and Spiller posit that a subgroup of producers’ residual demand must be sufficiently inelastic for there to be the potential for an increase in market power in the event of a merger. This argument makes sense in the context of the Merger Guidelines. If demand for a subgroup’s products with respect to (a) overall market demand and (b) demand for other producers’ products is not particularly sensitive to increases in price, it logically follows that all relevant products have been added to the proposed antitrust market. In other words, delineation of the relevant antitrust market is complete at that point. If, on the other hand, the quantity demanded for the subgroup fluctuates with changes in the price of the other producers’ product(s), the product market needs to be expanded. Baker and Bresnahan, in their 1988 paper “Estimating the Residual Demand Curve Facing a Single Firm,” illustrate this latter point when they say that “one firm’s contraction of output will be offset exactly by another’s expansion.”36

After summarizing the conceptual framework, Scheffman and Spiller step through the underlying mathematics and derive the residual demand function for a hypothetical homogenous product that is produced in two distinct geographic locations. Variables taken into account include prices in the location of interest, transportation costs between the two locations, known demand, and cost shifters, as well as random shocks to demand and supply.

The authors apply their proposed approach to estimate residual demand for wholesale unleaded gasoline in the eastern United States. Since prices of wholesale unleaded gasoline are highly correlated throughout the geographic area east of the Rocky Mountains, price correlation tests would lead antitrust economists to define the relevant antitrust market for this product as this entire region.
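The construction in Figure 1, residual demand as the portion of market demand not supplied by rival firms, can be reproduced numerically. The linear curves below are invented for illustration; they are not estimates from the paper.

```python
def market_demand(p):
    """Total quantity demanded at price p (invented linear curve)."""
    return max(0.0, 100 - 2 * p)


def rivals_supply(p):
    """Quantity supplied at price p by every firm except Firm A (invented)."""
    return max(0.0, 3 * p - 10)


def residual_demand(p):
    """Firm A's residual demand: market demand net of rivals' supply."""
    return max(0.0, market_demand(p) - rivals_supply(p))


def residual_elasticity(p, h=1e-6):
    """Point elasticity of residual demand via a central difference."""
    q = residual_demand(p)
    dq = (residual_demand(p + h) - residual_demand(p - h)) / (2 * h)
    return dq * p / q
```

At a price of 10, market demand is 80 units, rivals supply 20, and Firm A's residual demand is 60. The residual elasticity there is about -0.83 versus -0.25 for market demand at the same point: rivals' supply response makes the demand facing the individual firm more elastic than overall market demand, which is why the supply responsiveness of other producers matters for the antitrust inquiry.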
Scheffman and Spiller’s approach breaks the eastern United States into different combinations based on the US Department of Energy’s division of geographic areas. The authors employ regression analysis to estimate monthly residual demand price elasticities for each selected geographic area. They take into account cost shifters in each area such as the price of crude oil, energy use, and total refining capacity, as well as demand shifters such as personal income and gasoline prices. Their results indicate different geographic markets from those indicated by using the standard price and 36 Jonathan B. Baker and Timothy F. Bresnahan, “Estimating the Residual Demand Curve Facing a Single Firm,” International Journal of Industrial Organization 6 (1988): 283–300, 284.
shipment tests. While they conclude that several study areas would constitute relevant antitrust markets for wholesale unleaded gasoline, the market is not as broad as the entire eastern US.

Scheffman and Spiller’s work has been integral in providing an empirical application of the Merger Guidelines and has been cited over two hundred times. However, their employment of residual demand analysis to delineate relevant antitrust markets has not gone without criticism. Several papers highlight the importance of using residual demand estimation in conjunction with other information due to potential limitations in the method. For example, Froeb and Werden point to problems of extrapolation and nonstationarity.37 First, residual demand analysis requires an antitrust economist to make inferences about demand and cost conditions in the future. Although historical and current conditions can be relied on to a point, there is no guarantee that these conditions will be the same in the future. Elasticity of demand may not be sufficiently stable through time, for instance. Second, the authors reference the issue of nonstationarity—that economic conditions are not always constant. This is an issue particularly in the case of mergers because changing economic conditions often precede them.38

Critical Loss

Harris and Simons’ 1989 paper “Focusing Market Definition: How Much Substitution is Necessary?” offers a pragmatic approach for following existing guidance on antitrust market definition.39 Both the 1984 Merger Guidelines and existing case law present definitions of relevant product and geographic markets; however, neither delivers straightforward ways of empirically delineating these markets.40 Harris and Simons’ paper introduces a key concept for use in antitrust merger cases when economists are trying to discern whether a group of producers constitutes a relevant antitrust market.

37 Luke M. Froeb and Gregory J. Werden, “Residual Demand Estimation for Market Delineation: Complications and Limitations,” Review of Industrial Organization 6 (1991): 33–48.
38 Ibid.
39 Harris and Simons (1989): 207.
40 US DOJ, 1984 Merger Guidelines, available at http://www.justice.gov/atr/hmerger/11249.pdf.
The authors point to widespread criticism of the 1984 Merger Guidelines, which many have called “unworkable in practice.”41 For example, Stigler and Sherwin write, “[t]his market definition has one, wholly decisive defect: it is completely nonoperational. No method of investigation of data is presented and no data, even those produced by coercive process, are specified that will allow the market to be determined empirically.”42 The 1984 Merger Guidelines define an antitrust market as “a product or group of products and a geographic area” for which a hypothetical monopolist could impose a “small but significant and nontransitory price increase” and be profitable in doing so.43 Yet the 1984 Merger Guidelines do not clearly state how one should determine whether a given price increase is, or is not, profitable.

37 Luke M. Froeb and Gregory J. Werden, “Residual Demand Estimation for Market Delineation: Complications and Limitations,” Review of Industrial Organization 6 (1991): 33–48.
38 Ibid.
39 Harris and Simons (1989): 207.
40 US DOJ, 1984 Merger Guidelines, available at http://www.justice.gov/atr/hmerger/11249.pdf.
41 Harris and Simons (1989): 208.
42 Stigler and Sherwin (1985): 582.
43 1984 Merger Guidelines, 3.
In addition to this critique of the 1984 Merger Guidelines, the authors review the concept of “reasonable interchangeability” that has been cited in several major Supreme Court decisions, including US v. du Pont & Co. The decision in that case reads, “[i]n considering what is the relevant market for determining the control of price and competition, no more definite rule can be declared than that commodities reasonably interchangeable by consumers for the same purposes make up that ‘part of the trade or commerce.’”44 While this statement clearly indicates that products that are substitutes for one another are in the same market, it does not state the extent to which products must be interchangeable to be considered part of the same antitrust market.

The authors outline an empirical method that can both (a) determine when a given price increase would be profitable and (b) serve as a benchmark for the reasonable interchangeability standard. With any price increase, firms will inevitably lose sales because some customers are unwilling to pay the higher price; however, each sale a firm retains will bring in more revenue per unit, and the firm avoids the variable costs of the sales it loses. The critical loss calculation aims to determine “what producers could gain or lose from a price increase.”45

The authors’ process for determining the profitability of a price increase involves two main calculations. First, one must calculate the critical loss for a given price increase. The critical loss denotes the point at which firms are indifferent between the prevailing market price and a hypothetical higher price: the sales lost at the higher price exactly offset the gain on retained sales, so profits at each price are equal, and firms thus have no incentive to raise the price above prevailing levels.
Harris and Simons begin by setting profits in the two scenarios equal to one another and subsequently walk the reader through the steps to derive an equation for calculating critical loss. In the end, the only variables needed to solve for this value are the hypothetical price increase and the contribution margin, the latter of which is simply the additional profit earned on each unit sold. The equation is as follows, where X is the critical loss, Y is the hypothetical percentage price increase, and CM is the contribution margin:46

X = [Y/(Y + CM)]*100

A higher contribution margin yields a lower critical loss because each lost unit of sales is more valuable. Conversely, the critical loss is higher when the profit margin on each unit sold is low, because the firm can afford to lose more customers and still profit from a hypothetical price increase. Figure 2 depicts the concept of critical loss.

44 United States v. du Pont & Co., 351 US 377 (1956), 394–395.
45 Harris and Simons (1989): 157.
46 Ibid., 161.
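As a quick numerical sketch of the formula above (the figures are hypothetical, not taken from Harris and Simons): a 5 percent price increase with a 40 percent contribution margin yields a critical loss of about 11.1 percent, and the break-even logic can be verified by comparing profits at the two prices.

```python
def critical_loss(price_increase, contribution_margin):
    """Harris and Simons' critical loss, X = [Y/(Y + CM)]*100,
    with Y and CM entered as decimal fractions."""
    return price_increase / (price_increase + contribution_margin) * 100.0

# Hypothetical inputs: a 5% price increase and a 40% contribution margin
x = critical_loss(0.05, 0.40)                 # ~11.1% of unit sales

# Break-even check: losing exactly the critical loss leaves profit unchanged.
price, units = 1.0, 100.0                     # normalize price to 1
unit_cost = price * (1 - 0.40)                # CM = (P - c) / P
profit_before = (price - unit_cost) * units
profit_after = (price * 1.05 - unit_cost) * units * (1 - x / 100.0)
print(f"critical loss: {x:.1f}%")
```

At any sales loss smaller than `x`, the higher price is the more profitable of the two, which is exactly the comparison the second step of the analysis performs.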
Figure 2: Critical Loss Profit Calculation47

The graph shows a demand curve and marginal cost curve for a hypothetical product. When the firm raises the price for this product by ∆P, the quantity demanded decreases by ∆Q, resulting in a new quantity demanded, Q−∆Q. Put simply, the critical loss value is the ∆Q/Q at which Gained Profits equal Lost Profits, given some hypothetical price increase.

Adjustments can be made to the calculated critical loss if a product’s sales are directly or indirectly connected to those of another product produced by the same firm. For instance, if a reduction in a firm’s sales of Product A results in increased sales of the same firm’s Product B, the critical loss for the firm can be adjusted upward. Further, if lower levels of production of a firm’s Product C also mean lower levels for the same firm’s Product D, the critical loss can be adjusted downward. In this way, critical loss analysis can be adapted to different settings.

The second step in the process involves estimating the magnitude of sales that would actually be lost by a specified group of producers if they were to hypothetically increase prices by a given percentage. To perform such an estimation, one must consider various players, including customers, producers of the same product, and producers of other products. Residual demand elasticity is one approach for understanding the reactions of market players to hypothetical price increases. A firm’s residual demand curve reflects the market demand for that firm’s product that is not met by other firms in the industry at a given price. Residual demand elasticity measures the responsiveness of demand for a firm’s product to increases in the price of that product. In the context of antitrust merger analysis, a firm’s or group of firms’ residual demand must be sufficiently inelastic for there to be potential for market power.
Each critical loss value has a corresponding critical residual demand elasticity, which is calculated by dividing the critical loss by the hypothetical price increase. This value reflects the greatest demand responsiveness that can be tolerated before a price increase becomes unprofitable.

47 Ibid., 159.
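The conversion is a single division; as a sketch with hypothetical figures, a critical loss of 11.1 percent against a 5 percent price increase implies a critical residual demand elasticity of about 2.2.

```python
def critical_elasticity(critical_loss_pct, price_increase_pct):
    # Critical residual demand elasticity = critical loss / price increase:
    # the largest elasticity (in absolute value) at which the hypothetical
    # price increase is still profitable.
    return critical_loss_pct / price_increase_pct

# Hypothetical figures: an 11.1% critical loss for a 5% price increase
e_crit = critical_elasticity(11.1, 5.0)
print(f"critical elasticity: {e_crit:.2f}")
```

If the estimated residual demand elasticity exceeds `e_crit`, the price increase would lose more sales than the critical loss allows and would be unprofitable.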
The authors state that residual demand elasticity is often difficult to calculate given data limitations and suggest other tactics for quantifying consumer reactions to price changes. One recommended approach involves estimating how much it would cost the consumer to switch from one product to another. It logically follows that the lower the cost of substitution, the more likely a consumer is to switch to a viable substitute in the face of higher prices. A second option is to explore other similar products and determine whether any can satisfy the same uses at comparable price and quantity levels. Harris and Simons propose that employing these approaches in conjunction with the critical loss calculation can provide a benchmark for “determining how much interchangeability is sufficient to put two products or geographic areas in the same market.”48

To illustrate the applicability of the critical loss calculation, Harris and Simons point to the 1986 case FTC v. Occidental Petroleum Corp., in which both authors were retained by the defendants. In this case, the FTC challenged Occidental’s acquisition of the polyvinyl chloride (PVC) resin assets of Tenneco Polymers. The court identified two relevant product markets: (i) suspension homopolymer PVC resin and (ii) dispersion PVC resin. With the product markets established, the analysis shifted to identifying the relevant geographic market. Using variable cost and price information for these two products, the authors calculated contribution margins for each, which they in turn used to estimate critical losses for a hypothetical price increase of 5 percent. Following the steps outlined above, the authors then determined the actual loss in sales that would occur if Occidental raised prices of PVC resin by 5 percent. This was done by analyzing viable foreign substitutes.
More specifically, the analysis considered whether customers would be willing to purchase foreign-produced PVC resin and whether foreign PVC resin producers had the capacity to supply additional PVC resin to the United States. Given the availability of foreign-produced PVC resin, it was determined that the loss of sales from a hypothetical 5 percent price increase would exceed the critical loss estimates. Ultimately, the court decided that “the United States was an inappropriately small geographic market for both types of PVC resin.”49

Since Harris and Simons’ publication of “Focusing Market Definition: How Much Substitution is Necessary?” several papers have pointed out one way that critical loss analysis can be misused. In their paper “Critical Loss: Let’s Tell the Whole Story,” Katz and Shapiro argue that critical loss analysis can be “incomplete and potentially misleading.”50 When profit margins are very high, they argue, one may conclude that, because lost sales have a significant negative impact on profits, “a hypothetical monopolist controlling a group of products could not profitably raise prices.”51 This conclusion overlooks the fact that high profit margins may imply that the sales actually lost due to a price increase are small, “and thus a price increase might be profitable even when critical loss is small.”52

48 Ibid., 164.
49 Ibid., 165.
50 Michael L. Katz and Carl Shapiro, “Critical Loss: Let’s Tell the Whole Story,” Antitrust (2003).
51 Ibid., 50.
52 Ibid.
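The comparison at the heart of this step can be sketched as follows; the function name and the sample figures are illustrative, not drawn from the Occidental case.

```python
def price_increase_profitable(actual_loss_pct, critical_loss_pct):
    # A hypothetical monopolist's price increase is profitable only when the
    # sales actually lost fall short of the critical loss; when the actual
    # loss exceeds the critical loss, the candidate market is too narrow
    # and must be broadened.
    return actual_loss_pct < critical_loss_pct

# Illustrative figures: an estimated 15% actual loss against an 11% critical loss
if not price_increase_profitable(15.0, 11.0):
    print("candidate market too narrow; broaden it")
```

In the Occidental matter, the availability of foreign-produced resin put the actual loss above the critical loss, which is precisely the `False` branch of this comparison.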
O’Brien and Wickelgren convey similar sentiments in their 2003 critique of critical loss analysis as an approach to defining relevant antitrust markets.53 Specifically, they point out that high pre-merger profit margins may mean that customers are not price sensitive. In these situations, price increases would not necessarily lead to large sales losses. O’Brien and Wickelgren specify, however, that their critique “does not invalidate the critical loss formula derived [by] Harris and Simons as an algebraic statement about the loss necessary to make a given price increase unprofitable.”54 Rather, their aim is to highlight ways in which the critical loss formula can be erroneously applied. Scheffman and Simons respond to such criticisms, writing that “the significance… of [critical loss analysis] lies in its ease of practical application and from the fact that it is merely ‘arithmetic.’”55 In essence, they acknowledge that while critical loss analysis can be misused, it remains a valid approach for assessing market definition in antitrust cases.

Upward Pricing Pressure

In their 2010 paper “Antitrust Evaluation of Horizontal Mergers: An Economic Alternative to Market Definition,” Farrell and Shapiro aim to create a simple indicator to screen for unilateral effects of a proposed merger in a differentiated product setting. Their indicator measures net upward pricing pressure (UPP) and is meant to provide insight into whether a proposed merger is likely to cause price increases. The authors argue that the screening tool they develop is practical and “more solidly grounded in the underlying economics of unilateral effects than is the conventional approach,” without the need to predict “the full equilibrium adjustment of the industry to the merger.”56 In practice, however, UPP is often difficult to estimate and, at times, may still require defining a market.

As Farrell and Shapiro explain, merger review is both a common practice and a large undertaking.
Mergers meeting certain requirements57 must be reviewed and approved by the DOJ or FTC (hereafter, the “Agencies”) before they can be consummated. The purpose of this review is to ensure that the proposed merger does not substantially “lessen competition, or…tend to create a monopoly.”58 The Agencies look for two different types of effects: coordinated and unilateral. Coordinated effects occur if the merger makes collusion across firms more likely. Unilateral effects occur if the merger gives the newly merged firm an incentive to raise prices above pre-merger levels.

A typical analysis evaluates market concentration within a defined “relevant market.” However, in differentiated product markets, it is difficult to determine which products are in the relevant market and which are out, resulting in “an inevitably artificial line-drawing exercise.”59 To address the difficulty of defining markets, the 2010 Horizontal Merger Guidelines endorse using the HMT.60 This test is designed to address relevant market definition, but Farrell and Shapiro argue that it can result in excluding substitute products that compete to some degree with the products of interest. Thus, the Guidelines’ recommended method can lead to inappropriate market boundaries.

Farrell and Shapiro’s UPP methodology asks whether a merger will generate net UPP in a differentiated product market. Farrell and Shapiro describe two opposing forces that affect price after a merger. The first is a loss of direct competition, because the merging firms are no longer competing with each other as independent entities. After a merger, there is reduced competition between two products, which will cause upward pressure on price to the extent that quantity diverted by a price increase of one product is absorbed by the second product. The second force is marginal cost savings due to efficiencies resulting from the merger, which will cause downward pressure on price. The net effect of these two forces is the indicator that Farrell and Shapiro refer to as UPP. When this indicator is positive, incentives to increase price exceed the efficiency cost savings, and the merger would then be flagged for more detailed review.61

To illustrate the upward pressure on prices that may occur post-merger, Farrell and Shapiro imagine two merging firms A and B that compete in a standard Bertrand setting; these firms produce Product 1 and Product 2, respectively.62 Post-merger, the firms are treated as separate divisions within the same company and are told to jointly maximize profits.

53 Daniel P. O’Brien and Abraham L. Wickelgren, “A Critical Analysis of Critical Loss Analysis,” Antitrust Law Journal 71 (2003).
54 Ibid., 163.
55 David T. Scheffman and Joseph J. Simons, “The State of Critical Loss Analysis: Let’s Make Sure We Understand the Whole Story,” The Antitrust Source (2003): 1.
56 Farrell and Shapiro (2010): 2, 34.
57 Ibid., 1. Mergers of a “substantial” size are required to notify the Agencies of the proposed merger for review; as of 2010, the “size of transaction” threshold was $63.4 million.
58 2010 Horizontal Merger Guidelines, Section 1, p. 1 (citing Section 7 of the Clayton Act).
59 Farrell and Shapiro (2010): 4.
60 2010 Horizontal Merger Guidelines, Section 4.1.1.
61 As Farrell and Shapiro note, the two levels of review are not unique to a UPP analysis. Indeed, mergers reviewed using the more traditional market share and HHI approach also use a secondary level of review. See Farrell and Shapiro (2010): 3.
The incentives have changed now that the firms have merged, because increased sales of Product 1 will cannibalize some sales of Product 2, and vice versa. The cannibalization of Product 2 sales can now be viewed as an opportunity cost of selling more of Product 1. This opportunity cost can be thought of as a tax on each division’s output that deters increasing sales by lowering prices. For each division, the “tax” is equal to the value of the other division’s sales that are cannibalized.

To quantify this tax, one must calculate the “diversion ratio.”63 The ratio is the impact on sales of Product 2 when the price of Product 1 falls by enough to sell one more unit. For example, if the price of Product 1 falls and one hundred more units are sold, but thirty fewer units of Product 2 are sold, the diversion ratio is 0.3. The diversion ratio times the gross margin of Product 2 equals the value of Product 2 sales cannibalized. Thus, this tax is essentially equal to the lost profits resulting from reduced sales of Product 2 when sales of Product 1 increase. This post-merger “tax” can be thought of as an increase in the marginal cost of each product, which could result in a unilateral price increase.

Mergers may also create efficiencies as a result of marginal cost savings associated with combining the operations of the two firms. Efficiencies resulting from these lower marginal costs can be difficult for the Agencies to predict and quantify. For this preliminary assessment, Farrell and Shapiro suggest looking at certain “default marginal-cost efficiencies” for each of the merging firms’ overlapping products. A more detailed evaluation is postponed until after the screening phase. The “default efficiencies” calculated during Farrell and Shapiro’s screening phase could be based on evidence of efficiencies in comparable mergers. They suggest measuring efficiencies as a fraction of pre-merger marginal cost for each product, but recognize that certain efficiencies, such as an improvement in product quality, are not “naturally measured” as a fraction of marginal cost. They note that the merging firms are not required to prove these efficiencies, and that it is “the established policy” for horizontal mergers to be approved without proving efficiencies.64

These efficiencies counterbalance the potential unilateral price increase discussed above. If the value of the diversion is greater than the value of the efficiencies, the net effect is positive UPP. Specifically, Farrell and Shapiro present the following formula for calculating net UPP for Product 1, where D12 is the diversion ratio from Product 1 to Product 2, M2 is the pre-merger gross margin of Product 2, and E1 is the default efficiency credited to Product 1 (expressed per unit of Product 1):

UPP1 = D12*M2 - E1

Farrell and Shapiro suggest that proposed mergers with a positive UPP value should be flagged for more detailed review.

62 While their example assumes a Bertrand setting, Farrell and Shapiro note that UPP’s fundamental assumptions do not rely on a Bertrand setting and can be utilized in a variety of frameworks, “although unsurprisingly the quantitative measure will vary if one knows how industry conduct departs from Bertrand.” They make this point in a subsequent paper that responds to criticism from Epstein and Rubinfeld. See Joseph Farrell and Carl Shapiro, “Upward Pricing Pressure in Horizontal Merger Analysis: Reply to Epstein and Rubinfeld,” 10 The B.E. Journal of Theoretical Economics 1 (2010): Article 41, 1. See also Roy J. Epstein and Daniel L. Rubinfeld, “Understanding UPP,” 10 The B.E. Journal of Theoretical Economics 1 (2010), Article 21.
63 Epstein and Rubinfeld argue that the diversion ratio is closely related to cross-elasticity, as the two measure virtually the same thing but on different scales. They argue that cross-elasticity is often easier to calculate, as diversion ratios cannot be independently observed. Epstein and Rubinfeld (2010): 4–6.
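The diversion-ratio arithmetic and the UPP screen can be sketched as follows. The 100-unit/30-unit figures come from the example in the text; the $10 gross margin and $2 efficiency credit are assumed for illustration, not taken from the paper.

```python
def diversion_ratio(extra_units_1, lost_units_2):
    # Share of Product 1's added sales that come at Product 2's expense
    return lost_units_2 / extra_units_1

def net_upp(d12, margin_2, efficiency_1):
    # Net upward pricing pressure on Product 1: the cannibalization "tax"
    # (diversion ratio times Product 2's per-unit gross margin) minus the
    # per-unit default efficiency credited to Product 1.
    return d12 * margin_2 - efficiency_1

d12 = diversion_ratio(100, 30)                         # 0.3, as in the example
upp1 = net_upp(d12, margin_2=10.0, efficiency_1=2.0)   # assumed $10 margin, $2 credit
if upp1 > 0:
    print("positive UPP: flag merger for more detailed review")
```

With these numbers, the $3 per-unit tax outweighs the $2 efficiency credit, so the screen would flag the merger; a larger efficiency credit would flip the sign.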
One of the benefits of this methodology, Farrell and Shapiro claim, is that it does not require estimating post-merger equilibrium prices, which can be difficult. To measure the magnitude of the price change, it is necessary to know the rate at which a cost increase for a product is “passed through.” This requires knowledge of the curvature of demand, which is often difficult to assess without additional data that may not be available at the screening phase (i.e., prior to a second request). Merger simulation models (and the HMT) require estimating pass-through rates.65 Farrell and Shapiro believe that these models attempt to do more than is necessary and claim that the UPP methodology is a robust yet simple way to start merger review.

64 Farrell and Shapiro (2010): 10. They also note that it is a matter of debate whether horizontal mergers generate efficiencies. They suggest that their “default efficiency parameter can be set accordingly” to reflect “one’s optimism or pessimism about the ability of mergers to create synergies.”
65 In their response to Farrell and Shapiro, Epstein and Rubinfeld argue that UPP is a special case of a merger simulation model. Farrell and Shapiro counter that UPP is not what is typically meant by a “merger simulation model,” which attempts to quantify the post-merger equilibrium price. See Epstein and Rubinfeld (2010): 2; Farrell and Shapiro (2010): 3.
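To see why pass-through depends on demand curvature (a standard textbook monopoly example, not drawn from Farrell and Shapiro): with linear demand and constant marginal cost, exactly half of a cost increase is passed through; under other demand shapes the rate differs, which is why estimating it requires data that are rarely available at the screening phase.

```python
# Textbook monopoly sketch: inverse demand P = a - Q (slope normalized to 1)
# and constant marginal cost c. Profit (P - c)*(a - P) is maximized at
# P* = (a + c) / 2, so a cost increase is passed through at exactly 1/2.
a = 100.0

def monopoly_price(c):
    return (a + c) / 2.0

# A $1 marginal-cost increase raises the profit-maximizing price by $0.50
pass_through = monopoly_price(21.0) - monopoly_price(20.0)
print(f"pass-through of a $1 cost increase: ${pass_through:.2f}")
```

With convex demand the pass-through rate can exceed one half, and with concave demand it falls below it, so the rate cannot be pinned down without knowing the demand curve’s shape.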