Emerging Techniques in Power System Analysis

With 67 Figures

Department of Electrical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
Electric Power Research Institute, 3412 Hillview Ave, Palo Alto, CA 94304-1395, USA

Higher Education Press, Beijing
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2009933777
© Higher Education Press, Beijing and Springer-Verlag Berlin Heidelberg 2010
This work is subject to copyright. All rights are reserved, whether the whole or part of the
material is concerned, speciﬁcally the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microﬁlm or in any other way, and storage in
data banks. Duplication of this publication or parts thereof is permitted only under the
provisions of the German Copyright Law of September 9, 1965, in its current version, and
permission for use must always be obtained from Springer-Verlag. Violations are liable to
prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication
does not imply, even in the absence of a speciﬁc statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
Cover design: Frido Steinen-Broo, EStudio Calamar, Spain
Printed on acid-free paper
Springer is part of Springer Science + Business Media (www.springer.com)
Electrical power systems are among the most complex large-scale systems.
Over the past decades, with deregulation and increasing demand in many
countries, power systems have been operated under stressed conditions and subjected to higher risks of instability and more uncertainties. System operators
are responsible for secure system operations in order to supply electricity
to consumers eﬃciently and reliably. Consequently, power system analysis
tasks have become increasingly challenging and require more advanced techniques. This book provides an overview of some of the key emerging techniques
for power system analysis. It also sheds light on the next-generation technology innovations given the rapid changes occurring in the power industry,
especially with the recent initiatives toward a smart grid.
Chapter 1 introduces the recent changes in the power industry and the
challenging issues, including load modeling, distributed generation, situational awareness, and control and protection.
Chapter 2 provides an overview of the key emerging technologies following
the evolution of the power industry. Since it is impossible to cover all
emerging technologies in this book, only selected key emerging technologies
are described in detail in the subsequent chapters. Other techniques are
recommended for further reading.
Chapter 3 describes the first key emerging technique: data mining.
Data mining has proved to be an effective technique for analyzing very complex
problems, e.g. cascading failure and electricity market signal analysis. Data
mining theories and application examples are presented in this chapter.
Chapter 4 covers another important technique: grid computing. Grid computing techniques provide an eﬀective approach to improve computational
eﬃciency. The methodology has been used in practice for real time power
system stability assessment. Grid computing platforms and application examples are described in this chapter.
Chapter 5 emphasizes the importance of probabilistic power system analysis, including load ﬂow, stability, reliability, and planning tasks. Probabilistic approaches can eﬀectively quantify the increasing uncertainties in power
systems and assist operators and planners in making objective decisions.
Various probabilistic analysis techniques are introduced in this chapter.
Chapter 6 describes the application of an increasingly important device,
phasor measurement units (PMUs) in power system analysis. PMUs are able
to provide real time synchronized system measurement information which
can be used for various operational and planning analyses such as load modeling and dynamic security assessment. The PMU technology is the last key
emerging technique covered in this book.
Chapter 7 provides information leading to further reading on emerging
techniques for power system analysis.
With the new initiatives and continuously evolving power industry, technology advances will continue and more emerging techniques will appear. The
emerging technologies such as smart grid, renewable energy, plug-in electric
vehicles, emission trading, distributed generation, UHV AC/DC transmission,
FACTS, and demand side response will create significant impacts on power
systems. Hopefully, this book will increase the awareness of this trend and
provide a useful reference for the selected key emerging techniques covered.
Zhaoyang Dong, Pei Zhang
Hong Kong and Palo Alto
Zhaoyang Dong and Pei Zhang
With the deregulation of the power industry in many countries across the world, the industry has been experiencing many changes leading to increasing complexity, interconnectivity, and uncertainty. Demand
for electricity has also increased significantly in many countries, which has
resulted in increasingly stressed power systems. Insufficient investment
in the infrastructure for reliable electricity supply was regarded as a
key factor leading to several major blackouts in North America and Europe
in 2003. More recently, the initiative toward development of the smart grid
again introduced many additional new challenges and uncertainties to the
power industry. In this chapter, a general overview will be given starting
from deregulation, covering electricity markets, present uncertainties, load
modeling, situational awareness, and control issues.
1.1 Principles of Deregulation
The electricity industry has been undergoing a signiﬁcant transformation
over the past decade. Deregulation of the industry is one of the most important milestones. The industry had been moving from a regulated monopoly
structure to a deregulated market structure in many countries including the
US, UK, Scandinavian countries, Australia, New Zealand, and some South
American countries. Deregulation of the power industry has also been under
way recently in some Asian countries. The main motivations of deregulation are to:
• increase eﬃciency;
• reduce prices;
• improve services;
• foster customer choices;
• foster innovation through competition;
• ensure competitiveness in generation;
• promote transmission open access.
Together with deregulation, there are two major objectives for establishing
electricity markets. They are (1) to ensure a secure operation and (2) to
facilitate an economical operation (Shahidehpour et al., 2002).
1.2 Overview of Deregulation Worldwide
In South America, Chile started the development of a competitive system
for its generation services based on marginal prices as early as the early
1980s. Argentina deregulated its power industry in 1992 to form generation,
transmission, and distribution companies into a competitive electricity market where generators compete. Other South America countries followed the
trend as well.
In the UK, the National Grid Company plc was established on March 31,
1990, as the owner and operator of the high voltage transmission system in
England and Wales.
Prior to March 1990, the vast majority of electricity supplied in England and Wales was generated by the Central Electricity Generating Board
(CEGB), which also owned and operated the transmission system and the
interconnectors with Scotland and France. The great majority of the output
of the CEGB was purchased by the 12 area electricity boards, each of which
distributed and sold it to customers.
On March 31, 1990, the electricity industry was restructured and then
privatized under the terms of the Electricity Act 1989. The National Grid
Company plc assumed ownership and control of the transmission system and
joint ownership of the interconnectors with Scotland and France, together
with the two pumped storage stations in North Wales. These stations
were subsequently sold off.
In the early 1990s, the Scandinavian countries (Norway, Sweden, Finland and Denmark) created a Nordic wholesale electricity market – Nord
Pool (www.nordpool.com). The corresponding Nordic Power Exchange is
the world’s ﬁrst international commodity exchange for electrical power. It
serves customers in the four Scandinavian countries. Being the Nordic Power
Exchange, Nord Pool plays a key role as a part of the infrastructure of the
Nordic electricity power market and thereby provides an efficient, publicly
known price of electricity for both the spot and the derivatives markets.
In Australia, the National Electricity Market (NEM) commenced operation
in December 1998 in order to increase transmission efficiency and
reduce electricity prices. NEM serves as a wholesale market for the supply of
electricity to retailers and end use customers in ﬁve interconnected regions:
Queensland (QLD), New South Wales (NSW), Snowy, Victoria (VIC), and
South Australia (SA). Tasmania (TAS) joined the Australian NEM on May
29, 2005, through Basslink. The Snowy region was later abolished on July 1,
2008. In 2006 – 2007, the average daily demands in the current five regions
of QLD, NSW, VIC, SA, and TAS were 5 886 MW, 8 944 MW, 5 913 MW,
1 524 MW, and 1 162 MW, respectively. The NEM system is one of the world’s
longest interconnected power systems connecting 8 million end use consumers
with AUD 7 billion of electricity traded annually (2004 data) and spans over
4 000 km. The Unserved Energy (USE) of the NEM system is 0.002%.
In the United States, deregulation occurred in several regions. Among the
major electricity markets are the California market and the PJM
(Pennsylvania-New Jersey-Maryland) market. The deregulation
of the California electricity market followed a series of stages, starting from
the late 1970s, to allow non-utility generators to enter the wholesale power
market. In 1992, the Energy Policy Act (EPACT) formed the foundation for
wholesale electricity deregulation.
Similar deregulation processes have occurred in New Zealand and part of
Canada as well (Shahidehpour et al., 2002).
1.2.1 Regulated vs Deregulated
Traditionally the power industry is a vertically integrated single utility and
a monopoly in its service area. It is normally owned by the government, a
cooperative of consumers, or private investors. As the single electricity service
provider, the industry is also obligated to provide electricity to all customers
in the service area.
With the electricity supply service provider’s monopoly status, the regulator sets the tariﬀ (electricity price) to earn a fair rate of return on investments
and to recover operational expenses. Under the regulated environment, companies maximize proﬁts while being subject to many regulatory constraints.
From microeconomics, the sole service provider of a monopoly market has the
absolute market power. In addition, because the costs are allowed by the regulator to be passed to the customers, the utility has fewer incentives to reduce
costs or to make investments considering the associated risks. Consequently,
the customers have no choices for their electricity supply service providers
and have no choices on the tariﬀs (except in case of service contracts).
As compared with a monopoly market, an ideal competitive market normally has many sellers/service providers and buyers/customers. As a result
of competition, the market price is equal to the cost of producing the last
unit sold, which is the economically efficient solution. The role of deregulation is to structure a competitive market with enough generators to eliminate market power.
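The claim that the competitive price settles at the cost of the last unit sold can be illustrated with a minimal merit-order sketch. All offer data below are hypothetical:

```python
# Merit-order clearing sketch with hypothetical offer data: in an ideal
# competitive market each generator offers at its marginal cost, and the
# market price settles at the cost of the last (marginal) unit dispatched.

def clear_merit_order(offers, demand_mw):
    """offers: list of (marginal_cost_per_mwh, capacity_mw) tuples."""
    dispatched = []
    price = 0.0
    remaining = demand_mw
    for cost, cap in sorted(offers):       # cheapest offers dispatched first
        if remaining <= 0:
            break
        take = min(cap, remaining)
        dispatched.append((cost, take))
        price = cost                       # price set by the last unit sold
        remaining -= take
    if remaining > 0:
        raise ValueError("insufficient offered capacity")
    return price, dispatched

offers = [(20.0, 300), (35.0, 200), (50.0, 200)]   # hypothetical generators
price, schedule = clear_merit_order(offers, demand_mw=420)
```

With a 420 MW demand, the 20 $/MWh unit runs fully, the 35 $/MWh unit is marginal, and the price settles at 35 $/MWh; the cheap unit earns the difference between price and its cost.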
With the deregulation, traditional vertically integrated power utilities are
split into generation, transmission, and distribution service providers to form
a competitive electricity market. Accordingly, the market operation decision
model also changes as shown in Figs. 1.1 and 1.2.
Fig. 1.1. Market Operation Decision Model for the Regulated Power Industry –
Central Utility Decision Model
Fig. 1.2. Market Operation Decision Model for the Deregulated Power Utility –
Competitive Market Decision Model
In the deregulated market, the economic decision-making mechanism
corresponds to a decentralized process. Each participant aims at profit maximization. Unlike in the regulated environment, the recovery of the
investment in a new plant is not guaranteed in a deregulated environment.
Consequently, risk management has become a critical part of the electricity
business in a market environment.
Another key change resulting from the electricity market is the introduction of more uncertainties and stakeholders into the power industry. This
increases the complexity of power system analysis and leads to the
need for new techniques.
1.2.2 Typical Electricity Markets
There are three major electricity market models in practice worldwide. These
models include the PoolCo model, the bilateral contracts model, and the hybrid model.
1) PoolCo Model
A PoolCo is deﬁned as a centralized marketplace that clears the market
for buyers and sellers. A typical PoolCo model is shown in Fig. 1.3.
Fig. 1.3. Spot Market Structure (National Grid Management Council, 1994)
In a PoolCo market, buyers and sellers submit bids to the pool for the
amounts of power they are willing to trade in the market. Sellers in an electricity market would compete for the right to supply energy to the grid and not
for speciﬁc customers. If a seller (normally a generation company or GENCO)
bids too high, it may not be able to sell. In some markets, buyers also bid
into the pool to buy electricity. If a buyer bids too low, it may not be able to
buy. It should be noted that in some markets such as the Australian NEM,
only the sellers bid into the pool while the buyers do not, which means that
the buyers will pay at a pool price determined by the market clearing process. There is an independent system operator (ISO) in a PoolCo market to
implement economic dispatch and produce a single spot price for electricity.
In an ideal competitive market, the market dynamics will drive the spot price
to a competitive level equal to the marginal cost of the most eﬃcient bidders
provided the GENCOs bid into the market with their marginal costs in order
to get dispatched by the ISO. In such a market low cost generators will normally beneﬁt by getting dispatched by the ISO. An ideal PoolCo market is a
competitive market where the GENCOs bid with their marginal costs. When
market power exists, the dominating GENCOs may not necessarily bid with
their marginal costs.
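The PoolCo clearing described above can be sketched as a simplified two-sided uniform-price auction. All bid figures below are made up, and a real ISO dispatch also enforces network constraints:

```python
# Simplified two-sided PoolCo clearing sketch (hypothetical bids only).
# Sellers submit (price, MW) supply offers, buyers submit (price, MW)
# demand bids, and trades clear while a buyer is willing to pay at
# least the next seller's offer; the marginal seller sets the spot price.

def clear_pool(supply_bids, demand_bids):
    supply = sorted([list(b) for b in supply_bids])                # cheapest sellers first
    demand = sorted([list(b) for b in demand_bids], reverse=True)  # highest-value buyers first
    traded, price = 0.0, None
    i = j = 0
    while i < len(supply) and j < len(demand) and supply[i][0] <= demand[j][0]:
        qty = min(supply[i][1], demand[j][1])
        traded += qty
        price = supply[i][0]               # marginal seller sets the spot price
        supply[i][1] -= qty
        demand[j][1] -= qty
        if supply[i][1] == 0:
            i += 1
        if demand[j][1] == 0:
            j += 1
    return price, traded

spot, cleared_mw = clear_pool([(20, 100), (40, 100)], [(60, 80), (30, 100)])
```

In this example the 40 $/MWh seller prices itself out of the market: only 100 MW clears, all of it from the 20 $/MWh seller, so the spot price is 20 $/MWh.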
2) Bilateral Contracts Model
Bilateral contracts are negotiable agreements on delivery and receipt of
electricity between two traders. These contracts set the terms and conditions
of agreements independent of the ISO. However, in this model the ISO will
verify that sufficient transmission capacity exists to complete the transactions and maintain transmission security. The bilateral contract model
is very ﬂexible, as trading parties specify their desired contract terms. However, its disadvantages arise from the high costs of negotiating and writing
contracts and the risk of creditworthiness of counterparties.
3) Hybrid Model
The hybrid model combines various features of the previous two models.
In the hybrid model, the utilization of a PoolCo is not obligatory, and any
customer will be allowed to negotiate a power supply agreement directly with
suppliers or choose to accept power at the spot market price. In this model,
the PoolCo will serve all participants who choose not to sign bilateral contracts.
Moreover, allowing customers to negotiate power purchase arrangements with
suppliers will oﬀer a true customer choice and an impetus for the creation of a
wide variety of services and pricing options to best meet individual customer
needs (Shahidehpour et al., 2002).
1.3 Uncertainties in a Power System
Uncertainties have existed in power systems from the beginning of the power
industry. Uncertainties from demand and generator availability have been
studied in reliability assessment for decades. However, with the deregulation and other new initiatives happening in the power industry, the level of
uncertainty has been increasing dramatically. For example, in a deregulated
environment, although generation planning is considered in the overall planning process, it is diﬃcult for the transmission planner to access accurate
information concerning generation expansion. Transmission planning is no
longer coordinated with generation planning by a single planner. Future generation capacities and system load ﬂow patterns also become more uncertain.
In this new environment, other possible sources of uncertainty include (Buygi
et al., 2006; Zhao et al., 2009):
• system load;
• bidding behaviors of generators;
• availability of generators, transmission lines, and other system facilities;
• installation/closure/replacement of other transmission facilities;
• carbon prices and other environmental costs;
• market rules and government policies.
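As a rough illustration of how such uncertainties can be quantified, the following Monte Carlo sketch samples uncertain demand and generator availability to estimate a loss-of-load probability. All figures (unit sizes, outage rates, load distribution) are hypothetical:

```python
import random

# Minimal Monte Carlo sketch of quantifying two of the uncertainty
# sources above: system load and generator availability. It estimates
# the probability that the available generation fails to cover the load
# (a loss-of-load probability), using made-up unit and load data.

def estimate_lolp(n_trials=20000, seed=1):
    random.seed(seed)
    # (capacity in MW, forced-outage rate) for three hypothetical units
    units = [(100.0, 0.05), (150.0, 0.08), (200.0, 0.10)]
    shortfalls = 0
    for _ in range(n_trials):
        load = random.gauss(300.0, 30.0)   # uncertain demand, MW
        available = sum(cap for cap, q in units if random.random() > q)
        if available < load:
            shortfalls += 1
    return shortfalls / n_trials

lolp = estimate_lolp()
```

The same sampling skeleton extends naturally to the other listed sources, for instance by drawing bidding behaviors or carbon prices from assumed distributions.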
1.3.1 Load Modeling Issues
Among the sources of uncertainties, power system load plays an important
role. In addition to the uncertainties coming from forecast demand, load
models also contribute to system uncertainty, especially for power system
simulation and stability assessment tasks. Inappropriate load models may
lead to wrong conclusions and possibly cause serious damage to the system.
It is necessary to give a brief discussion of the load modeling issues here.
Power system simulation is the most important tool guiding the operation
and control of a power grid. The accuracy of the power system simulation
relies heavily on the model reliability. Among all the components in a power
system, the load model is one of the least well known elements; however,
its signiﬁcant inﬂuences on the system stability and control have long been
recognized (Concordia and Ihara, 1982; Undrill and Laskowski, 1982; Kundur 1993; IEEE 1993a; IEEE 1993b). Moreover, the load model has direct
influences on power system security. On August 10, 1996, the WSCC (Western Systems Coordinating Council) system in the USA collapsed following power
oscillations. The blackout caused huge economic losses and endangered state
security. However, the system model guiding the WSCC operation had failed
to predict the blackout. The model validation process following this outage indicated that the load model in the WSCC database was not
adequate to reproduce the event. This strongly suggests that a more reliable
load model is desperately needed. The load model also has great eﬀects on
economic operation of a power system. The available transfer capability of
the transmission corridor is highly aﬀected by the accuracy of the load models used. Due to the limited understanding of load models, a power system is
usually operated very conservatively, leading to poor utilization of both
the transmission and the generation assets.
Nevertheless, it is also widely known that modeling the load is diﬃcult due
to the uncertainty and the complexity of the load. The power load consists of
various components, each with their own characteristics. Furthermore, load is
always changing, both in its amount and composition. Thus, how to describe
the aggregated dynamic characteristic of the load remains an unsolved problem.
Due to the blackouts which occurred all around the world in the last few
years, load modeling has received more attention and has become a new research focus.
The state of the art for research on load modeling is mainly dedicated to
the structure of the load model and algorithms to ﬁnd its parameters.
The structure of the load model has great impacts on the results of power
system analysis. It has been observed that diﬀerent load models will lead to
various, even completely contrary conclusions on system stability (Kosterev
et al., 1999; Pereira et al., 2002). The traditional production-grade power
system analysis tools often use the constant impedance, constant current, and
constant power load model, namely the ZIP load model. However, simulation
results obtained by modeling the load with ZIP often deviate from field test results,
which indicates the inadequacy of the ZIP load model. To capture the strong
nonlinear characteristic of load under the recovery of the voltage, a load model
with a nonlinear structure was proposed by Hill (1993). A load structure in
terms of nonlinear dynamic equations was later proposed by Karlsson and
Hill (1994). Lin et al. (1993) identified two dynamic load model structures based
on measurements, stating that a second-order transfer function captures the
load characteristics better than a first-order transfer function. The recent
trend has been to combine the dynamic load model with the static model
(Lin et al., 1993; Wang et al., 1994; He et al., 2006; Ma et al., 2006). Wang et
al. (1994) developed a load model as a combination of an RC circuit in parallel
with an induction motor equivalent circuit. Ma et al. (Ma et al., 2006; He et
al., 2006; Ma et al., 2007; Ma et al., 2008) proposed a composite load model
of the ZIP in combination with the motor. An interim composite load model
that is 80% static and 20% induction motor was proposed by (Pereira et
al., 2002) for WSCC system simulation. Besides the load model structure,
the identification algorithm to find the load model parameters is also widely
researched. Both linear and nonlinear optimization algorithms are applied
to solve the load modeling problem. However, the identiﬁcation algorithm is
based on the model structure and it cannot give reliable results without a
sound model structure.
Although various model structures have been proposed for modeling load
for research purposes, the power industry still uses very simple static load
models. The reason is that some basic problems on composite load modeling
are still open; they mainly include three key points. First, which model
structure among the various proposed ones is most appropriate to represent the
dynamic characteristic of the load, and is it the model with the simplest
structure? Second, can this model structure be identified? Is the parameter
set given by the optimization process really the true one, since the optimization
may easily get stuck in local minima? Third, how good is the generalization
capability of the proposed load model? Load is always changing; however,
a model can only be built on available measurements. So, the generalization
capability of the load model reﬂects its validity. Theoretically, the ﬁrst point
involves the minimized realization problem, the second point addresses the
identification problem, and the third point closely relates to the statistical
distribution of the load.
A sound load model structure is the basis for all other load modeling
practice. Without a good model structure, all the eﬀorts to ﬁnd reliable load
models are in vain. According to the Occam’s razor principle, among all
models describing a process accurately, the simplest one is the best
(Nelles, 2001). Correspondingly, simpliﬁcation of the model structure is an
important step in obtaining reliable load models (Ma et al., 2008). Currently,
ZIP in combination with a motor is used to represent the dynamic characteristic of the load. However, there are various components of a
load. Take motors as an example: there are big motors and small motors,
industrial motors and domestic motors, and three-phase motors and single-phase
motors. Correspondingly, different load compositions are used to model different loads or loads at different operating conditions. Once the load model
structure is selected, proper load model parameter values are needed. Given
the variations of the actual loads in a power system, a proper range of
parameter values can be used to provide a useful guide in selecting suitable
load models for further simulation purposes.
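The static ZIP model discussed above can be written out compactly. The sketch below uses illustrative coefficient values, not values from any particular study:

```python
# The static ZIP model named above, written out explicitly: active power
# is a sum of constant-impedance (Z), constant-current (I), and
# constant-power (P) fractions of the nominal load. The coefficient
# values below are purely illustrative.

def zip_load(v_pu, p0_mw, a_z=0.4, a_i=0.3, a_p=0.3):
    """P(V) = P0 * (a_z*V^2 + a_i*V + a_p), with a_z + a_i + a_p = 1."""
    assert abs(a_z + a_i + a_p - 1.0) < 1e-9
    return p0_mw * (a_z * v_pu ** 2 + a_i * v_pu + a_p)

# At nominal voltage (1.0 p.u.) the model returns the nominal load P0;
# a voltage dip reduces the Z and I components but not the P component.
p_nominal = zip_load(1.0, 100.0)
p_dip = zip_load(0.95, 100.0)
```

The composite models cited earlier attach an induction-motor equivalent in parallel with such a static branch; choosing the fractions then amounts to the parameter-selection problem described in the text.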
Parameter estimation is required in order to calculate the parameter values for a given load model with system response measurement data. This
often involves optimization algorithms and linear/nonlinear least squares
estimation (LSE) techniques, or a combination of both approaches.
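As a minimal illustration of LSE-based parameter estimation, the sketch below fits ZIP fractions to voltage/power measurements by solving the 2x2 normal equations directly. The measurement data are synthetic, generated from known coefficients so the fit can be checked:

```python
# A hedged sketch of least-squares parameter estimation for the ZIP
# model: fit the Z and I fractions from voltage/power measurements (the
# P fraction follows from a_z + a_i + a_p = 1). With a_p eliminated,
# P/P0 - 1 = a_z*(V^2 - 1) + a_i*(V - 1), which is linear in (a_z, a_i).

def fit_zip(samples, p0):
    """samples: iterable of (v_pu, p_mw) pairs at nominal load p0."""
    sxx = sxy = syy = sy1 = sy2 = 0.0
    for v, p in samples:
        x1, x2 = v * v - 1.0, v - 1.0      # regressors after eliminating a_p
        y = p / p0 - 1.0
        sxx += x1 * x1
        sxy += x1 * x2
        syy += x2 * x2
        sy1 += x1 * y
        sy2 += x2 * y
    det = sxx * syy - sxy * sxy
    a_z = (sy1 * syy - sxy * sy2) / det    # Cramer's rule on the normal equations
    a_i = (sxx * sy2 - sxy * sy1) / det
    return a_z, a_i, 1.0 - a_z - a_i

# Noiseless synthetic measurements generated from a_z=0.4, a_i=0.3, a_p=0.3:
data = [(v, 100.0 * (0.4 * v * v + 0.3 * v + 0.3)) for v in (0.90, 0.95, 1.05, 1.10)]
estimate = fit_zip(data, p0=100.0)
```

With noiseless data the fit recovers the true fractions exactly; with field measurements, noise and the local-minima issue raised above make the estimated parameters far less certain.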
A model with the appropriate structure and parameters usually has good
performance when ﬁtting the available data. However, it does not necessarily
mean it is a good model. A good load model must have good generalization
capability. Since a load is always changing, the model built on the available data must also have a strong capability to describe unseen data.
Methodologies used for generalization capability analysis include statistical
analysis and various machine learning methods. Even if a model with good
generalization capability has been obtained, cross validation is still needed
because it is still possible that the derived load model may fail to represent
the system dynamics in some system operating conditions involving system
transients. It is worth noting that both research and engineering practice in
load modeling are still facing many challenges. There are many complex load
modeling problems causing diﬃculties to the power industry; consequently,
static load models are still used by some companies in their operations and planning practices.
1.3.2 Distributed Generation
In addition to those uncertainty factors discussed previously, another
important issue is the potential large-scale penetration of distributed generation (DG) into the power system. Traditionally, the global power industry
has been dominated by large, centralized generation units which are able
to exploit significant economies of scale. In recent decades, the centralized
generation model has drawn growing concern over its costs, security vulnerabilities, and environmental impacts, while DG is expected to play an
increasingly important role in the future provision of a sustainable electricity
supply. Large-scale implementation of DG will cause signiﬁcant changes in
the power industry and deeply inﬂuence the transmission planning process.
For example, DG can reduce local power demand; thus, it can potentially
defer investments in the transmission and distribution sectors. On the other
hand, when the penetration of DG in the market reaches a certain level, its
suppliers will have to get involved in the spot market and trade the electricity through the transmission and distribution networks, which may need
to be further expanded. Reliability of some types of DGs is also a concern for the transmission and distribution network service providers (TNSPs
and DNSPs). Therefore, it is important to investigate the impacts of DG on
power system analysis, especially in the planning process. The uncertainties
DG brings to the system also need to be considered in power system analysis.
1.4 Situational Awareness
The huge impact in economic terms as well as interruptions of daily life from
the 2003 blackouts in North America and the subsequent blackouts in the UK and
Italy clearly showed the need for techniques to analyze and prevent such
devastating events. According to the Electricity Consumers Resource Council (2004), the blackout in August 2003 in the United States and Canada left 50
million people without power supply and with an economic cost estimated
at up to $10 billion. The many studies of this major blackout concluded
that a lack of situational awareness is one of the key factors that resulted
in the widespread power system outage. It has been concluded that this
lack of situational awareness stemmed from a number of factors such as
deﬁciencies in operator training, lack of coordination and ineﬀectiveness in
communications, and inadequate tools for system reliability assessment. This
lack of situational awareness applies to other major system blackouts
as well. As a result, operators and coordinators were unable to visualize the
security and reliability status of the overall power system following some
disturbance events. Such poor understanding of the system modes of operations and the health of the network equipment also resulted in the Scandinavian
blackout incident of 2003. As the complexity and connectivity of power systems continue to grow, situational
awareness becomes more and more important for system operators and coordinators. New methodologies are needed so that
better awareness of system operating conditions can be achieved. The capability of control centres will be enhanced with better situational awareness.
This can be partially promoted by the development of operator and control centre tools which allow for more efficient proactive control actions as compared
with the conventional preventative tools. Real time tools, which are able to
perform robust real time system security assessment even with the presence
of system wide structural variations, are very useful in allowing operators
to have a better mental model of the system’s health. Therefore, prompt
control actions can be taken to prevent possible system wide outages.
In its blackout report, the NERC Real-Time Tools Best Practices Task
Force (RTTBPTF) deﬁned situational awareness as “knowing what is going
on around you and understanding what needs to be done and when to maintain, or return to, a reliable operating state.” NERC’s Real-Time Tools Survey report presented situational awareness practices and procedures, which
should be used to deﬁne requirements or guidelines in practice. According to
Endsley (1998), situational awareness (or situation awareness) comprises three
levels: (1) perception of elements, (2) comprehending the meaning of these elements, and (3) projecting future system
states based on the understanding from levels 1 and 2. For level 1 of situational awareness, operators can use tools which provide real time visual
and audio alarm signals which serve as indicators of the operating states
of the power system. According to NERC (NERC 2005; NERC 2008) there
are three ways of implementing such alarm tools: within the
SCADA/EMS system, as external functions, or as a combination of the two.
The NERC Best Practices Task Force report (2008) summarized the following
situational awareness practice areas: reserve monitoring for both
reactive reserve capability and operating reserve capability; alarm response
procedures; conservative operations to move the system from unknown and
potentially risky conditions into a secure state; operating guides deﬁning procedures about preventive actions; load shed capability for emergency control;
system reassessment practices; and blackstart capability practices.
1.5 Control Performance
This section provides a review of the present framework of power system protection and control (EPRI, 2004; EPRI, 2007; SEL-421 Manual; ALSTOM,
2002; Mooney and Fischer, 2006; Hou et al., 1997; IEEE PSRC WG, 2005;
Tzaiouvaras, 2006; Plumptre et al., 2006). Both protection and control can
be viewed as corrective and/or preventive activities to enhance system
security. More specifically, protection can be viewed as activities to disconnect and
de-energize some components, while control can be viewed as activities without physical disconnection of a signiﬁcant portion of system components. In
this book, we do not intend to make a clear distinction between protection
and control. We collectively use the term “protection and control” to indicate the activities to enhance system security. In addition, although there
are a number of ways to classify the protection and control systems based on
different viewpoints, this book classifies protection and control as local and
centralized to emphasize the need for better coordination in the future.
1.5.1 Local Protection and Control
A distance relay is the most commonly used relay for local protection of
transmission lines. Distance relays measure voltage and current and compare the apparent impedance with the relay settings. When the tripping criteria
are reached, distance relays will trip the breakers and clear the fault. Typical
forms of distance relays include impedance relay, mho relay, modiﬁed mho
relay, and combinations thereof. Usually, distance relays may have Zone 1,
Zone 2, and Zone 3 relays to cover longer distances of transmission lines with
the delayed response time as shown below:
• Zone 1 relay time and the circuit breaker response time may be as fast
as 2 – 3 cycles;
• Zone 2 relay response time is typically 0.3 – 0.5 seconds;
• Zone 3 relay response time is about 2 seconds.
Fig.1.4 shows the Zone 1, Zone 2, and Zone 3 distance relay characteristics.
Fig. 1.4. R-X diagram of Zone 1, Zone 2, and Zone 3 Distance Relay Characteristics
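The zone logic above can be sketched numerically. The following is a minimal illustration, not taken from the book: it models each zone as a self-polarized mho circle (passing through the origin, with diameter equal to the reach impedance) and returns the fastest zone that sees the fault. The line impedance, reach fractions (80%, 120%, 250%), and delays are common textbook assumptions.

```python
# Illustrative Zone 1-3 mho distance-relay logic; all settings are assumptions.
LINE_Z = 8 + 40j            # assumed positive-sequence line impedance (ohms)
ZONES = [                    # (name, reach as fraction of LINE_Z, delay in s)
    ("Zone 1", 0.80, 0.05),  # ~3 cycles at 60 Hz
    ("Zone 2", 1.20, 0.40),
    ("Zone 3", 2.50, 2.00),
]

def in_mho(z_apparent, reach_z):
    """Self-polarized mho circle: passes through the origin with diameter
    equal to the reach impedance; a point lies inside if its distance from
    the circle centre (reach/2) is less than the radius (|reach|/2)."""
    centre = reach_z / 2
    return abs(z_apparent - centre) < abs(reach_z) / 2

def relay_decision(v_phasor, i_phasor):
    """Return (zone_name, delay) of the fastest zone seeing the fault,
    or None if the apparent impedance lies outside all zones."""
    z_app = v_phasor / i_phasor        # apparent impedance seen by the relay
    for name, frac, delay in ZONES:    # zones are ordered fastest first
        if in_mho(z_app, frac * LINE_Z):
            return name, delay
    return None

# A fault at 50% of the line gives an apparent impedance of 0.5 * LINE_Z:
print(relay_decision(0.5 * LINE_Z, 1.0))
```

A fault at mid-line lands well inside Zone 1, while a heavy-load point such as 100 + 10j ohms stays outside all three circles, so no trip is issued.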
Prime Mover Control and Automatic Generation Control (AGC) is
applied to maintain the power system frequency within a required range
by the control of the active power output of a generator. Prime movers of
a synchronous generator can be either hydraulic turbines or steam turbines.
The control of prime movers is based on the frequency deviation and load
characteristics. The AGC is used to restore the frequency and the tie-line
ﬂow to their original and scheduled values. The input signal of AGC is called
Area Control Error (ACE), which is the sum of the tie-line ﬂow deviation
and the frequency deviation multiplied by a frequency-bias factor.
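The ACE definition above can be written out directly. The sketch below uses illustrative numbers (not from the book) and the common NERC sign convention in which the frequency-bias factor is negative and expressed in MW per 0.1 Hz.

```python
# Minimal Area Control Error (ACE) computation; all figures are illustrative.
def area_control_error(p_tie_actual, p_tie_sched, f_actual, f_sched, bias):
    """ACE in MW. `bias` is the (negative) frequency-bias factor in
    MW per 0.1 Hz; the factor of 10 converts it to MW per Hz."""
    delta_p = p_tie_actual - p_tie_sched   # tie-line interchange deviation (MW)
    delta_f = f_actual - f_sched           # frequency deviation (Hz)
    return delta_p + 10.0 * bias * delta_f

# Area delivering 30 MW less than scheduled while frequency sags to 59.95 Hz:
ace = area_control_error(470.0, 500.0, 59.95, 60.0, bias=-50.0)
print(ace)   # about -5 MW: the area should raise its generation
```

A negative ACE tells the AGC to raise generation in the area, which restores both the tie-line flow and the frequency toward their scheduled values.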
Power System Stabilizer (PSS) technology is applied to improve small-signal stability by providing additional damping. PSSs are installed in the excitation system to provide auxiliary signals to the excitation system voltage regulating
loop. The input signals of PSSs are usually signals that reﬂect the oscillation
characteristics, such as the shaft speed, terminal frequency, and power.
Generator Excitation System is utilized to improve power system stability
and power transfer capability, which are the most important issues in bulk
power systems under heavy load ﬂow. The primary task of the excitation
system in synchronous generators is to maintain the terminal voltage of the
generator at a constant level and guarantee reliable machine operations for
all operating points. The governing functions achieved are (1) voltage control,
(2) reactive power control, and (3) power factor control. The power factor
control uses the excitation current limitation, stator current limitation, and
rotor displacement angle limitation linked to the governor.
On-Load Tap Changer (OLTC) is applied to keep the voltage on the low
voltage (LV) side of a power transformer within a preset dead band, such that
the power supplied to voltage sensitive loads is restored to the pre-disturbance
level. Usually, an OLTC takes tens of seconds to minutes to respond to a low voltage event. The OLTC may have a negative impact on voltage stability, because restoring a higher voltage at the load side draws a higher reactive current, which worsens the reactive power problem during a voltage instability event.
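The dead-band behaviour described above can be sketched as follows. This is an illustrative model only: the 1.0 pu target, ±1% dead band, and 0.625% voltage change per tap step are assumed typical values, not figures from the text, and the per-step time delay of tens of seconds is omitted.

```python
# Hedged sketch of OLTC dead-band tap logic; all settings are assumptions.
STEP = 0.00625   # assumed per-tap voltage change (pu)

def oltc_step(v_lv_pu, target=1.0, dead_band=0.01):
    """Return tap adjustment (+1 raise, -1 lower, 0 hold) for one interval."""
    if v_lv_pu < target - dead_band:
        return +1          # raise tap to lift the LV-side voltage
    if v_lv_pu > target + dead_band:
        return -1          # lower tap
    return 0               # inside the dead band: hold

def restore_voltage(v_lv_pu, max_steps=16):
    """Step the tap until the voltage re-enters the dead band or taps run out."""
    steps = 0
    while (d := oltc_step(v_lv_pu)) != 0 and steps < max_steps:
        v_lv_pu += d * STEP
        steps += 1
    return round(v_lv_pu, 5), steps

print(restore_voltage(0.95))   # sagged LV voltage after a disturbance
```

Starting from 0.95 pu, seven raise operations bring the voltage back inside the dead band; at tens of seconds per step this is the minutes-long response the text describes.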
Shunt Compensation in bulk power systems includes traditional technology like capacitor banks and new technologies like the static var compensator
(SVC) and the static compensator (STATCOM). An SVC consists of shunt
capacitors and reactors connected via thyristors that operate as power electronics switches. They can consume or produce reactive power at speeds in
the order of milliseconds. One main disadvantage of the SVC is that its reactive power output varies with the square of the voltage at the bus to which it is connected, which is similar to capacitors. STATCOMs are power electronics based SVCs. They use gate turn-off thyristors or insulated gate bipolar transistors (IGBTs) to convert a DC voltage input to an AC signal that is chopped into pulses, which are then recombined to correct the phase angle between voltage and current. STATCOMs have a response time in the order of milliseconds.
Load shedding is performed only under an extreme emergency in modern
electric power system operation, such as faults, loss of generation, switching
errors, lightning strikes, and so on. For example, when system frequency drops
due to insuﬃcient generation under a large system disturbance, load shedding
should be done to bring frequency back to normal. Also, if bus voltage slides
down due to an insuﬃcient supply of reactive power, load shedding should
also be performed to bring voltage back to normal. The former scheme can be realized via under-frequency load shedding (UFLS), while the latter can be realized via under-voltage load shedding (UVLS).
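A staged UFLS scheme can be illustrated with a small table. The thresholds and shed percentages below are hypothetical, chosen only to mirror the shape of typical multi-stage schemes; actual settings are utility-specific.

```python
# Illustrative staged under-frequency load shedding (UFLS) table.
UFLS_STAGES = [        # (frequency threshold in Hz, % of system load to shed)
    (59.3, 10),
    (59.0, 10),
    (58.7, 10),
]

def ufls_total_shed(frequency_hz):
    """Cumulative percentage of system load shed at a given frequency:
    every stage whose threshold has been reached contributes its block."""
    return sum(pct for thresh, pct in UFLS_STAGES if frequency_hz <= thresh)

print(ufls_total_shed(59.1))   # first stage only
print(ufls_total_shed(58.6))   # all three stages
```

Shedding in blocks as frequency falls lets the scheme trade off the amount of interrupted load against the depth of the frequency excursion.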
1.5.2 Centralized Protection and Control
Out-of-step (OOS) relaying provides blocking or tripping functions to separate the system when loss of synchronism occurs. Ideally, the system should
be separated at such points as to maintain a balance between load and generation in each separated area. Moreover, separation should be performed
quickly and automatically in order to minimize the disturbance to the system and to maintain maximum service continuity via the OOS blocking relay
and tripping relay. During a transient swing, the OOS condition can be
detected by using two relays having vertical (or circular) characteristics on an
R-X plane as shown in Fig.1.5. If the time required for the apparent impedance locus to cross the two characteristics (OOS1 and OOS2) exceeds a specified
value, the OOS function is initiated. Otherwise, the disturbance will be identiﬁed as a line fault. The OOS tripping relays should not operate for stable
swings. They must detect all unstable swings and must be set so that normal
load conditions are not picked up. The OOS blocking relays must detect the
condition before the line protection operates. To ensure that line relaying is
not blocked for fault conditions, the setting of the relays must be such that
normal load conditions are not in the blocking area.
Fig. 1.5. Tripping zones and out-of-step relay
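The timing test described above can be sketched as a simple classifier: a fault collapses the apparent impedance through both characteristics almost instantly, while a power swing traverses them slowly. The blinder positions and the 30 ms threshold below are illustrative assumptions, and only the resistive coordinate is tracked for simplicity.

```python
# Simplified timing-based out-of-step detection; settings are assumptions.
OOS2_R, OOS1_R = 30.0, 15.0   # outer / inner characteristic resistances (ohms)
SWING_TIME_S = 0.030           # minimum outer-to-inner travel time for a swing

def classify(trajectory):
    """trajectory: list of (time_s, R_ohms) samples of the apparent impedance.
    Returns 'swing', 'fault', or 'none' (never crossed both characteristics)."""
    t_outer = t_inner = None
    for t, r in trajectory:
        if t_outer is None and r <= OOS2_R:
            t_outer = t                  # crossed the outer characteristic
        if t_inner is None and r <= OOS1_R:
            t_inner = t                  # crossed the inner characteristic
    if t_outer is None or t_inner is None:
        return "none"
    return "swing" if (t_inner - t_outer) >= SWING_TIME_S else "fault"

print(classify([(0.000, 50), (0.002, 10)]))             # near-instant: fault
print(classify([(0.00, 50), (0.05, 25), (0.12, 12)]))   # slow traverse: swing
```

The "swing" result would initiate the OOS function (blocking or tripping), while the "fault" result leaves the ordinary line protection in charge.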
Special Protection Systems (SPS), also known as Remedial Action Schemes
(RAS) or System Integrity Protection Systems (SIPS), have become more
widely used in recent years to provide protection for power systems against
problems that do not directly involve specific equipment fault protection. An SPS is applied to solve single and credible multiple contingency problems.
These schemes have become more common primarily because they are less
costly and quicker to permit, design, and build than other alternatives such
as constructing major transmission lines and power plants. An SPS senses
abnormal system conditions and (often) takes pre-determined or pre-designed
actions to prevent those conditions from escalating into major system disturbances. SPS actions minimize equipment damage and prevent cascading outages, uncontrolled loss of generation, and interruptions to customer electric
service. SPS remedial actions may be initiated by critical system conditions
which can be system parameter changes, events, responses, or a combination
of them. SPS remedial actions include generation rejection, load shedding,
controlling reactive units, and/or using braking resistors.
SCADA/EMS is the most typical application of centralized control in
power systems. It is a hardware and software system used by operators to
monitor, control, and optimize a power system. The monitor and control
functions are known as SCADA; the advanced analytical functions such as
state estimation, contingency analysis, and optimization are often referred
to as EMS. Typical beneﬁts of SCADA/EMS systems include: improved
quality of supply, improved system reliability, and better asset utilization
and allocation. An area of increasing interest among the EMS functions is online security analysis software tools, which typically provide transient stability analysis, voltage security analysis, and small-signal stability analysis. The latest developments in computer hardware and software and in power system simulation algorithms now deliver accurate results for these functions in real time, which could not be achieved online in the past.
1.5.3 Possible Coordination Problem in the Existing Protection
and Control System
Fig.1.6 summarizes the time delays, on a logarithmic scale, of various protections and controls based on a number of references (4 – 10). As shown in this figure, the time delays of many different control systems or strategies have considerable overlaps. The reason is historical: in the past, the design of each control was based on a single goal, to solve a particular problem. As modern power systems become more interconnected and operate under increasing stress levels, disturbances may cause multiple controls to respond, among which some may be undesired. This trend presents great challenges and risks in protection and control, as evidenced by increasing occurrences of
blackout events in North America. This challenge will be illustrated with two
case analyses in the next section.
Fig. 1.6. Time frame of the present protection and control system
1.5.4 Two Scenarios to Illustrate the Coordination Issues among
Protection and Control Systems
1) Load Shedding or Generator Tripping
This case analysis shows a potential coordination problem in a two-area
system with a generation center (see the left part in Fig.1.7) and a load pocket
(see the right part in Fig.1.7). Assume the load pocket experiences a heavy
load increase on a hot summer day. Meanwhile, a transmission contingency
event occurs in the tie-line between the generation center and the load pocket
causing a reduction of the power import to the load pocket. Then, the load
in the load pocket may be signiﬁcantly greater than the sum of total local
generation, the (reduced) import from the tie-line, and the spinning reserves.
This may lead to a decrease of both frequency and voltage. Certainly, under
this scenario, excessive load is the root cause of imbalance, and load shedding
in the load pocket is an eﬀective short-term solution.
However, there may be a potential risk of blackouts if the local generators’ under-frequency (UF) tripping scheme and the loads’ under-voltage
(UV) shedding scheme are not well coordinated. Likely, the under-frequency
generation tripping scheme will disconnect some generation from the system before the load shedding scheme is completed, since the present setting
in generation tripping is usually very fast. This will worsen the imbalance
between load and generation in the load pocket. Hence, both voltage and frequency may decrease further. This may lead to more generation being tripped quickly, and the local load pocket will lose a large amount of reactive power for voltage support. Therefore, this may lead to a sharp drop of voltage and eventually a fast voltage collapse. Even though this is initially a real power imbalance or frequency stability problem, the final consequence is a voltage collapse. Fig.1.8 shows the gradual process based on the above analysis.
Fig. 1.7. A two-area sample system
Fig. 1.8. The process to instability
As previously mentioned, the root cause is the imbalance of generation
and load in the load pocket. The generation tripping and load shedding are not well coordinated to perform load shedding first in order to avoid the generation tripping, which eventually causes a sharp voltage collapse.
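The arithmetic of this scenario can be checked with a toy power balance: the load pocket is in trouble once its load exceeds local generation plus the (reduced) tie-line import plus spinning reserve. All megawatt figures below are hypothetical.

```python
# Toy power-balance check for the two-area scenario; figures are hypothetical.
def pocket_deficit(load_mw, local_gen_mw, tie_import_mw, spinning_reserve_mw):
    """MW of load that must be shed to restore balance (0 if balanced)."""
    supply = local_gen_mw + tie_import_mw + spinning_reserve_mw
    return max(0.0, load_mw - supply)

# Before the tie-line contingency: 5000 MW load, 3000 local, 1800 import, 300 reserve
print(pocket_deficit(5000, 3000, 1800, 300))   # balanced: 0.0
# After the contingency the import drops to 1200 MW
print(pocket_deficit(5000, 3000, 1200, 300))   # 500 MW must be shed
```

The 500 MW result is exactly the amount the load shedding scheme must remove before the fast under-frequency generation tripping deepens the imbalance.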
2) Zone 3 Protection
The second example is from the July 2, 1996, WSCC blackout. At the very
beginning of the blackout, two parallel lines were tripped due to fault and
mis-operation, and consequently some generation was tripped as a correct
SPS response. Then, a third line was disconnected due to bad connectors in
a distance relay. More than 20 seconds after these events, the last straw of the
collapse occurred. This last straw was the trip of the Mill Creek-Antelope line
due to an undesired Zone 3 protective relay operation. After this tripping, the system
collapsed within 3 seconds. The relay of the Mill Creek-Antelope line did as
it should do based on its Zone 3 setting, which was to trip the line when the
observed apparent impedance encroached upon the circle of the Zone 3 relay
as shown in Figs.1.9 and 1.10. In this case, the low apparent impedance was
the consequence of the power system conditions at that moment. Obviously, if the setting of the Zone 3 relay could be dynamically reconfigured to account for the heavily loaded system condition, the system operators might have enough time to perform corrective actions and save the system from a collapse.
Fig. 1.9. The line tripping immediately leading to a fast, large-area collapse during the WSCC July 2, 1996, Blackout
Fig. 1.10. Observed impedance encroaching the Zone 3 circle
Power systems have been experiencing dramatic changes over the past decade.
Deregulation is one of the main changes occurring across the world. Increased
connectivity and the resultant nonlinear complexity of power systems is another
trend. The consequences of such changes are various uncertainties and diﬃculties in power system analysis. Recent major power system blackouts also
remind the power industry of the need for situational awareness and more
eﬀective tools in order to ensure more secure operation of the system. This
chapter has reviewed these important aspects of the power system worldwide.
This chapter serves as an introduction and forms the basis for further
discussion on the emerging techniques in power system analysis.
References
ALSTOM (2002) Network Protection & Automation Guide. ALSTOM, Levallois-Perret
Buygi MO, Shanechi HM, Balzer G et al (2006) Network planning in unbundled
power systems. IEEE Trans Power Syst 21(3)
Concordia C, Ihara S (1982) Load representation in power systems stability studies.
IEEE Trans. Power App Syst 101: 969 – 977
Endsley MR (1988) Situation awareness global assessment technique. Proceedings
of The National Aerospace and Electronics Conference. IEEE, pp789 – 795
EPRI Project Opportunities (2007) PMU-based Out-of-step Protection Scheme
General Electric Company (1987) Load modeling for power ﬂow and transient stability computer studies, Vol 1 – 4, EPRI Report EL-5003
IEEE Task Force on Load Representation for Dynamic Performance (1993) Load
representation for dynamic performance analysis. IEEE Trans Power Syst 8(2):
472 – 482
IEEE Task Force on Load Representation for Dynamic Performance (1995) Bibliography on load models for Power ﬂow and dynamic performance simulation.
IEEE Trans Power Syst 10(1): 523 – 538
IEEE Task Force on Load Representation for Dynamic Performance (1995) Standard load models for power ﬂow and dynamic performance simulation. IEEE
Trans Power Syst 10(3): 1302 – 1313
Hill DJ (1993) Nonlinear dynamic load models with recovery for voltage stability
studies. IEEE Trans Power Syst 8(1): 166 – 176
He RM, Ma J, Hill DJ (2006) Composite load modeling via measurement approach.
IEEE Trans Power Syst 21(2): 663 – 672
Hou D, Chen S, Turner S (1997) SEL – 321 – 5 relay out-of-step logic. Schweitzer
Engineering Laboratories, Inc Application Guide AG97-13
Karlsson D, Hill DJ (1994) Modeling and identiﬁcation of nonlinear dynamic loads
in power systems. IEEE Trans Power Syst 9(1): 157 – 166
Kundur P (1993) Power system stability and control. McGraw-Hill, New York
Kosterev DN, Taylor CW, Mittelstadt WA (1999) Model validation for the August 10, 1996 WSCC system outage. IEEE Trans Power Syst 14(3): 967 – 979
Lin CJ, Chen YT, Chiang HD et al (1993) Dynamic load models in power systems
using the measurement approach. IEEE Trans Power Syst 8(1)
Ma J, He RM, Hill DJ (2006) Load modeling by ﬁnding support vectors of load
data from ﬁeld Measurements, IEEE Trans Power Syst 21(2): 726 – 735
Ma J, Han D, He R et al (2008) Reducing identified parameters of measurement-based composite load model. IEEE Trans Power Syst 23(1): 76 – 83
Ma J, Dong ZY, He R et al (2007) System energy analysis incorporating comprehensive load characteristics. IET Gen Trans Dist, 1(6): 855 – 863
Mooney J, Fischer N (2006) Application guidelines for power swing detection on
transmission systems. Proceedings of the 59th annual conference for protective
relay engineers. 2006 IEEE, 289 – 298
National Grid Management Council (1994) Empowering the market – national electricity reform for Australia, December 1994
Nelles O (2001) Nonlinear system identiﬁcation. Springer, Heidelberg
NERC (North American Electric Reliability Council) (2005) Best practices task
force report. Discussions, Conclusions, and Recommendations
NERC Real-Time Tools Best Practices Task Force (2008) Real-time tools survey
analysis and recommendations. Final Report
Pereira L, Kosterev D, Mackin P et al (2002) An interim dynamic induction motor
model for stability studies in the WSCC. IEEE Trans Power Syst 17(4): 1108 –
Plumptre F, Brettschneider S, Hiebert A et al (2006) Validation of out-of-step
protection with a real time digital simulator. TP6241-01, BC hydro, Cegertec,
BC Transmission Corporation and Schweitzer Engineering Laboratories Inc
Price WW, Wirgau KA, Murdoch A et al (1988) Load modeling for load ﬂow and
transient stability computer studies. IEEE Trans Power Syst 3, pp180 – 187
Shahidehpour M, Ymin H, Li Z (2002) Market operations in electric power systems.
Forecasting, Scheduling, and Risk Management, IEEE, Wiley, New York
Tzaiouvaras D (2006) Relay performance during major system disturbances.
TP6244 – 01, SEL
Thorpe GH (1998) Competitive electricity market development in Australia. Proceedings of ARC Workshop on Emerging Issues and Methods in the Restructuring of the Electric Power Industry, The University of Western Australia, 20 – 22
Wang JC, Chiang HD, Chang CL et al (1994) Development of a frequency-dependent
composite load model using the measurement approach. IEEE Trans Power
Syst 9(3): 1546 – 1556
Undrill JM, Laskowski TF (1982) Model selection and data assembly for power
system simulation. IEEE Trans Power App Syst, 101, pp. 3333 – 3341
Schweitzer Engineering Laboratories (2001) SEL-421 Relay Protection Automation Control, SEL-421 Manual
Zhao J, Dong ZY, Lindsay P et al (2009) Flexible transmission expansion planning
in a market environment. IEEE Trans Power Syst 24(1): 479 – 488
Zhang P, Min L, Hopkins L, Fardanesh B (2007) Utility experience performing probabilistic risk assessment for operational planning. Proceedings of the 14th ISAP, November 2007
2 Fundamentals of Emerging Techniques
Xia Yin, Zhaoyang Dong, and Pei Zhang
Following the new challenges of the power industry outlined in Chapter 1, new
techniques for power system analysis are needed. These emerging techniques
cover various aspects of power system analysis including stability assessment,
reliability, planning, cascading failure analysis, and market analysis. In order
to better understand the functionalities and needs for these emerging techniques, it is necessary to give an overview of these emerging techniques and
compare these emerging ones with traditional approaches.
In this chapter, the following emerging techniques will be outlined. Some
of the key techniques and their applications in power engineering will be
detailed in the subsequent chapters. The main objective is to provide a holistic
picture of the technological trends in power system analysis over recent years.
2.1 Power System Cascading Failure and Analysis Techniques
In 2003, there were several major blackouts, which were regarded as results of
cascading failures of power systems. The increasing number of system instability events is mainly because of the operation of market mechanisms which
has driven more generation investments but provided insuﬃcient transmission expansion investments. With the increased demand for electricity, many
power systems have been heavily loaded. As a result, power systems are running close to their security limits and therefore vulnerable to disturbances
(Dong et al., 1995).
The blackout of 14 August 2003 (Michigan Public Service Commission
2003) in the USA has so far been the worst case which aﬀected Michigan,
Ohio, New York City, Ontario, Quebec, northern New Jersey, Massachusetts,
and Connecticut, according to a North American Electric Reliability Council (NERC) report. Over 50 million people experienced that blackout over a
considerable number of hours. The economic loss and political impact were
enormous, and concerns regarding national security rose from the power sector. The major reasons for the blackout were identiﬁed as (U.S.-Canada Power
System Outage Task Force, 2004):
• failure to identify emergency conditions and communicate to neighboring systems;
• ineﬃcient communication and/or sharing of system wide data;
• failure to ensure operation within secure limits;
• failure to assess system stability conditions in some aﬀected areas;
• inadequate regional-scale visibility over the bulk power system;
• failure of the reliability organizations to provide effective real-time diagnostic support;
• a number of other reasons.
According to an EPRI report (Lee, 2003), in the 1990s, electricity demand
in the US grew by 30%, but for the same period there was only a 15%
increase in new transmission capacity. Such imbalance continues to grow;
it is estimated that from 2002 to 2011, demand will grow a further 20%
with only a 3.5% increase in new transmission capacity. This has caused a
signiﬁcant increment in transmission congestion and has created many new
bottlenecks in the ﬂows of bulk power. This situation has further stressed
the power system. It is a far more complex problem than a simple voltage
collapse based on the information available so far.
As clearly indicated in much of the literature about this event, the reasons for such large-scale blackouts are extremely complex and have yet to be fully understood. Although established system security assessment tools were in operation at the power companies across the blackout-affected region,
the system operators were unable to identify the severity of emerging system
signals and therefore unable to reach a timely remedial decision to prevent
such cascading system failure.
The state-of-the-art power system stability analysis leads to the following observations:
• many power systems are vulnerable to multiple contingency events;
• the current design approaches to maintain stability are based on
deterministic approaches which do not correctly include the uncertainty in the power system parameters or the failures which can impact the system;
• the explicit consideration of the uncertainties in disturbances and of power system parameters can impact the decisions on the placement of correction devices such as FACTS devices or on the control design of the system;
• the explicit consideration of where the system breaks under multiple
contingencies can be used to adjust the controllers and the links to be
strengthened in power system design;
• the mechanism of cascading failure blackouts has not been fully understood;
• if timely information about system security is available even a short
time beforehand, many of the severe system security problems such as
blackouts could be avoided.
It can be seen that the information involved to properly assess the security
of a power system is increasingly complex with open access and deregulation.
New techniques are needed to handle such problems.
Cascading failure is a main form of system failure leading to blackouts.
However, the mechanism of cascading failure is still diﬃcult to analyze in
order to develop reliable algorithms to monitor, predict, and prevent blackouts.
To face the impending challenges from operation and planning with
respect to cascading failure avoidance, power system reliability analysis needs
new evaluation tools. So far, the widely recognized contingency analytical
method for large interconnected power systems is the N-1 criterion (CIGRE, 1992). In some cases, N-1 can even be defined as the loss of a set of components of the system within a short time. The merits of the N-1 criterion are its flexibility, clarity, and simplicity of implementation. However,
with the increasing risk of the occurrence of catastrophic failure and system
complexity, this criterion may not provide suﬃcient information of the vulnerability and severity level of the system. Since catastrophic disruptions are
normally caused by cascading failures of electrical components, the importance of studying the inherent mechanism of cascading outages is attracting
more and more attention.
So far, many models have been documented on simulating cascading failures. In the article by Dobson et al., 2003, a load-dependent model is proposed
from a probabilistic point of view. At the start, each system component is randomly allocated a virtual load. Then the model is initiated by adding
a disturbance load to all the components. A component will be tripped when
its load exceeds the maximum limit, and other unfailed components will
receive a constant load from this failure. This cascading procedure will terminate when there are no component failures within a cascading scenario.
This model can fully explore all the possibilities of cascading cases of the system. This cascading model is further improved by incorporating branching
process approximation in the article by Dobson et al., 2004, so that the propagation of cascading failures can be demonstrated. However, neither of them addressed the joint interactions among system components during cascading scenarios. In the article by Chen et al., 2005, cascading dynamics is
investigated under diﬀerent system operating conditions via a hidden failure
model. This model employs linear programming (LP) generation redispatch
combined with DC load flow for power distribution and emphasizes the possible
failures existing in the relay system. Chen et al. (2006) study the mechanism of cascading outages by estimating the probability distribution of historical data of transmission outages. However, neither of the methods above considers failures of other network components, such as generators and loads.
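The load-dependent model of Dobson et al. (2003) summarized above can be sketched compactly: components receive random virtual loads, an initiating disturbance is added to all of them, and each failure transfers a fixed additional load to every surviving component until no new failures occur. The parameter values below are illustrative, not those of the original paper.

```python
# Compact sketch of a load-dependent cascading-failure model; illustrative only.
import random

def cascade_size(n=100, l_max=1.0, disturbance=0.05, transfer=0.02, seed=1):
    """Return the total number of failed components in one cascade run."""
    rng = random.Random(seed)
    load = [rng.uniform(0.0, l_max) + disturbance for _ in range(n)]
    alive = set(range(n))
    failed_total = 0
    while True:
        newly_failed = [i for i in alive if load[i] > l_max]
        if not newly_failed:                 # no new failures: cascade ends
            return failed_total
        for i in newly_failed:
            alive.discard(i)
        failed_total += len(newly_failed)
        for i in alive:                      # constant load transfer per failure
            load[i] += transfer * len(newly_failed)

print(cascade_size())
```

Sweeping the initial loading or the transfer constant in this sketch reproduces the model's key qualitative behaviour: cascades stay small at light loading and grow abruptly near a critical loading level.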
In the article by Stubna and Fowler, 2003, to describe the statistics of
robust complex systems under uncertain conditions, the highly optimised tolerance (HOT) model is introduced to simulate blackout phenomena in power
systems. A simulation result shows that this model reasonably ﬁts the historical data set of one realistic test power system. Besides these proposed models,
the investigation of critical transitions of a system according to the system
loading conditions during cascading procedure is also studied (Carreras et
al., 2002). The paper finds that the size of the blackouts experiences a sharp increase once the system loading condition exceeds a critical transition point.
Efforts have also been dedicated to understanding cascading faults from global system perspectives. Since the inherent differences of systems make it difficult to propose a generalized mathematical model for all networks, these analysis approaches are normally established on probabilistic and statistical theories. In the article by Carreras et al., 2004, based on a detailed time series analysis of 15 years of North American Electric Reliability Council (NERC) historical blackout data, the authors find that cascading failures
occurring in the system had exhibited self organised criticality (SOC) dynamics. This work shows that the cascading collapse of systems may be caused
by the power system global nonlinear dynamics instead of weather or other
external triggering disturbances. This evidence provides a global philosophy
for understanding the catastrophic failures in power systems.
It has been recognised that the structures of complex networks always
affect their functions (Strogatz, 2001). Due to the complexity inherent in power grids, the study of system topology is another interesting approach. In the article by Lu et al., 2004, the “small world” concept is introduced for analysing and comparing the topology characteristics of power networks in China and the United
States. The result shows that many power grids fall within the “small world”
category. The paper by Xu and Wang (2005) employs scale-free coupled map lattices
(CML) models to investigate the cascading phenomena. The result indicates
that the increase in the homogeneity of the network will be helpful to enhance
the system stability. However, since topology analyses normally require networks to be homogeneous and non-weighted, it might need approximations
when dealing with power grid issues.
Recent NERC studies of major blackouts (NERC US Canada Power System Outage Task Force 2004) have shown that more than 70% of those blackouts involved hidden failures, which are incorrect relay operations, namely
removing a circuit element(s) as a direct consequence of another switching
event (Chen et al., 2005; Jun et al., 2006). When a transmission line trips, there is a small but significant probability that lines sharing a bus with the tripped line (such lines are said to be exposed to hidden failures) may incorrectly trip due to relay malfunctioning. The Electric Power Research Institute
(EPRI) and Southern Company jointly developed a cascading failure analysis
software, called Transmission Reliability Evaluation of Large-Scale Systems
(TRELSS), which has been applied in real systems for several years (Makarov
and Hardiman, 2003). The model addresses the trips of loads, generators, and
protection control groups (PCG). In every cascading scenario, the value of
load node voltages, generator node voltages as well as circuit overloads will
be investigated sequentially, and the next cascading fault will be determined
from the result. The model is very complex for application (Makarov and Hardiman, 2003).
IEEE PES CAMS Task Force (2008, 2009) on Understanding, Prediction,
Mitigation and Restoration of Cascading Failures provides a detailed review
of the issues of cascading failure analysis. The research and development in
this area continue with various techniques (Liu et al., 2007; Nedic et al., 2006;
Kirschen et al., 2004; Dobson et al., 2005; Dobson et al., 2007; Chen et al.,
2005; Sun and Lee, 2008; Hung and Nieplocha, 2008; Zhao et al., 2007; Mili
et al., 2004; Kinney et al., 2005).
2.2 Data Mining and Its Application in Power System Analysis
Data mining is the process of identifying hidden, potentially useful, and understandable information and patterns from large databases; in short, it is the process of discovering hidden patterns in databases. It is an important step in
the process of knowledge discovery in databases (Olaru and Wehenkel, 1999).
It has been used in a number of areas of power system analysis where large amounts of data are involved, such as forecasting and contingency assessment.
It is well known that online contingency assessment or online dynamic
security assessment (DSA) is a very complex task that requires a signiﬁcant
amount of computational costs for many real interconnected power systems.
With increasing complexity in modern power systems, the corresponding system data are exponentially increasing. Many companies store such data but
are not yet able to fully utilize them. Under such emerging complexity, it is
desirable to have reliable and fast algorithms to perform such duties instead
of the traditional time-consuming security assessment/dynamic simulation tools.
It should be noted that artiﬁcial intelligence (AI) techniques such as neural networks (NNs) have been used for similar purposes as well. However, AI
based methods suffer from a number of shortcomings which have prevented their
wider application in realistic situations so far. The major shortcomings of
NN based online dynamic security assessment are the inference opacity, the
over-fitting problem, and applicability to large-scale systems. A lack of statistical information from NN outputs is also a major concern which limits its practical application.
Data mining based real time security assessment approaches are able to
provide statistically reliable results and have been widely applied in many
complex systems such as telecommunication systems and internet security
areas. In power engineering, data mining has been successfully employed
in a number of areas including fault diagnosis and condition monitoring of
power system equipment, customer load proﬁle analysis (Figueiredo et al.,
2005), nontechnical loss analysis (Nizar, 2008), electricity market demand
and price forecasting (Zhao et al., 2007a; Zhao et al., 2007b; Zhao et al.,
2008), power system contingency assessment (Zhao, 2008c), and many other
tasks for power system operations (Madan et al., 1995; Tso et al., 2004; Pecas
Lopes and Vasconcelos, 2000). However, there is still a lack of systematic
application of data mining techniques in some specific areas, such as large
scale power system contingency assessment and prediction (IEEE PES CAMS Taskforce, 2009).
For applications such as a power system online DSA, it is critical to have
assessment results within a very short time in order for the system operator to take corresponding control actions to prevent serious system security
problems. Data mining based approaches, with their mathematically and
statistically reliable characteristics open up a realistic solution for on-line
DSA type tasks. They outperform the traditional AI based approach in many
aspects. First, data mining is originally designed to discover useful patterns in
large-scale databases, in which AI approaches usually face unaﬀordable time
complexity. Therefore, data mining based approaches are able to provide fast
responses in user-friendly, efficient forms. Second, a variety of data cleaning techniques have been incorporated into data mining algorithms, giving
them a strong tolerance of noisy inputs. Most importantly, a number of data mining methods derive from traditional statistical theory. For
instance, the Bayesian classiﬁer is from Bayesian decision theory and support vector machine (SVM) is based on statistical learning theory. As a
result, these techniques are able to handle large-scale data sets. Moreover,
they have strong statistical robustness and the ability to overcome over-ﬁtting
problems as compared with AI techniques. The statistical robustness means
that if the system is assessed to have a security problem, it will experience
such a problem with a given probability of occurrence if no actions are taken.
This characteristic is very important for the system operator managing the
system security in a market environment where any major actions are associated with potentially huge ﬁnancial risks. The operator needs to be sure
that a costly remedial action (such as load shedding) is necessary before that
action takes place. Data mining normally involves four types of tasks:
classification, clustering, regression, and association rule learning (Wikipedia; Han, 2006).
Classification is an important task in data mining and is therefore presented
in more detail here. According to Vapnik (1995), the classification
problem belongs to the family of supervised learning problems, which can be described
using three components:
• a generator of random vectors X, drawn independently from a ﬁxed
but unknown distribution P (X);
• a supervisor that returns an output value y for every input vector (in
classiﬁcation problems, y should be discrete and is called class label for
a given X), according to a conditional distribution function P (y|X),
also ﬁxed but unknown;
• a learning machine capable of implementing a set of functions f (X, α),
α ∈ Λ.
The objective of a classifier is to find the f (X, α), α ∈ Λ, that best approximates the supervisor's response. Predicting the occurrence of a system
contingency is a typical binary classification problem. The factors relevant to the contingencies (e.g., demand and weather) can be seen as the
dimensions of the input vector X = (x1, x2, . . ., xn), where each xi, i ∈ [1, n], is an individual factor.
So far, a number of classification algorithms have been put into practice. According to Sebastiani (2002), the main classification
algorithms can be categorized as: decision tree and rule based approaches
such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier
(Lewis, 1998); on-line methods such as Winnow (Littlestone, 1998); example-based methods such as k-nearest neighbors (Duda and Hart, 1973); and SVM
(Cortes and Vapnik, 1995).
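As a concrete illustration of the example-based category, the sketch below implements a plain k-nearest neighbors classifier for a hypothetical binary contingency-prediction problem. The features (demand, temperature) and the training labels are invented for illustration and are not drawn from any real system.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points, using Euclidean distance."""
    neighbours = sorted(
        train,
        key=lambda item: math.dist(item[0], query),
    )[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical training set: (demand in MW, temperature in C) -> label,
# where 1 = contingency observed and 0 = normal operation.
train = [
    ((950.0, 38.0), 1), ((980.0, 41.0), 1), ((990.0, 35.0), 1),
    ((600.0, 22.0), 0), ((620.0, 25.0), 0), ((580.0, 20.0), 0),
]

print(knn_classify(train, (970.0, 39.0)))  # high demand on a hot day
print(knn_classify(train, (610.0, 23.0)))  # light load on a mild day
```

In practice the input vector X would carry many more dimensions and the distance metric would need scaling, but the structure of the supervised learning problem — generator, supervisor, learning machine — is the same.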
Similar to classiﬁcation, clustering also allocates similar data into groups
but the groups are not pre-deﬁned. Regression is used to model the data series
with the least error. Association rule learning is used to discover relationships
between variables in a data base (Han, 2006).
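The clustering task can be illustrated with a minimal one-dimensional k-means sketch, here grouping hypothetical daily peak demands into two load-profile clusters; all numbers, and the initial centroid positions, are illustrative assumptions.

```python
import statistics

def kmeans_1d(values, centroids, iters=20):
    """Plain k-means on scalar data: assign each value to its nearest
    centroid, then move each centroid to the mean of its members."""
    for _ in range(iters):
        clusters = {c: [] for c in centroids}
        for v in values:
            nearest = min(centroids, key=lambda c: abs(c - v))
            clusters[nearest].append(v)
        centroids = [statistics.mean(m) if m else c
                     for c, m in clusters.items()]
    return sorted(centroids)

# Hypothetical daily peak demands (MW): two kinds of feeder mixed together.
peaks = [4.8, 5.1, 5.0, 4.9, 19.7, 20.4, 20.1, 19.9]
print(kmeans_1d(peaks, centroids=[0.0, 30.0]))
```

Unlike classification, no labels are supplied: the two groups emerge from the data alone, which is what distinguishes clustering from the supervised setting above.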
More detailed discussion on data mining will be given in Chapter 3 of this book.
2.3 Grid Computing
With the deregulation and constant expansion of power systems, the demand
for high performance computing (HPC) in power system adequacy and
security analysis has increased rapidly. HPC also plays an important role in
ensuring eﬃcient and reliable communication for power system operation and
control. In the past few years, grid computing technology has been catching
up and is receiving much attention from power engineers and researchers (Ali
et al., 2009; Irving et al., 2004). Grid computing is an infrastructure that can provide high performance computing and communication
mechanisms for delivering services in these areas of the power system.
It has been recognized that the commonly used Energy Management Systems (EMS) are unable to meet such requirements for HPC and for data and resource sharing in their operations (Chen et al., 2004). In
the past, some eﬀorts had been made in order to enhance the computational
power of EMS (Chen et al., 2004) in the form of parallel processing, but only
the centralized resources were used, and an equal distribution of computing
tasks among participating computers was assumed. In parallel processing,
a task can be divided into a number of equally sized subtasks distributed to all systems. For this purpose, all machines need to be dedicated and should be
homogeneous, i.e. they should have common conﬁgurations and capabilities,
otherwise diﬀerent computers may return results at diﬀerent times depending on their availability when the tasks were assigned to the computers. In
parallel processing, there is also a need to combine data from different
organizations, which is sometimes very hard due to various technical or security issues (Chen et al., 2004). Consequently, there should be a mechanism for
processing the distributed and multi-owner data repositories (Cannataro and
Talia, 2001). Some distributed computing solutions have also been proposed
to achieve high-efficiency computation, but they demand homogeneous resources and are not scalable. In addition, parallel processing
techniques involve tight coupling of the machines (Chen et al., 2004). Use
of supercomputers is another solution, but it is very expensive and often
not suitable, especially for a single organization with limited resources.
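The equal-subtask model described above can be sketched as follows; `analyse_contingency` is a hypothetical stand-in for a real security-analysis subtask, and a thread pool stands in for the dedicated, homogeneous machines that classical parallel processing assumes.

```python
from concurrent.futures import ThreadPoolExecutor

def analyse_contingency(case_id):
    """Stand-in for one security-analysis subtask; a real implementation
    would run a power flow or time-domain simulation for this case."""
    return case_id, case_id % 7 != 0   # pretend every 7th case is insecure

cases = list(range(28))   # contingency list to be screened
n_workers = 4             # dedicated, homogeneous machines assumed

# Equal-size partitioning, as classical parallel processing assumes:
# each worker receives exactly len(cases) / n_workers subtasks.
chunks = [cases[i::n_workers] for i in range(n_workers)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    per_chunk = pool.map(
        lambda chunk: [analyse_contingency(c) for c in chunk], chunks)
    results = [r for chunk in per_chunk for r in chunk]

insecure = sorted(case for case, secure in results if not secure)
print("insecure cases:", insecure)
```

The weakness the text points out is visible here: the equal split is only efficient if every worker is equally fast and always available, which is exactly the assumption grid computing relaxes.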
Grid computing is an infrastructure that can provide an integrated environment for all these participants in the electricity market and power system
operations by providing secured resources as well as data sharing and high
performance computing for power system analysis. Grid computing can be
applied in virtually any field in which computers are used, including
communications, analysis, and organizational decision making.
Grid computing is a new technology that involves the integrated and collaborative use of computers, networks, databases, and scientiﬁc instruments
owned and managed by multiple organizations (Foster and Kesselman, 1997;
Foster et al., 2001). It is able to provide HPC and access to remote, heterogeneous, and geographically separated data over a vast area. This technology was developed mainly by the e-science community (EUROGRID, NASA IPG,
PPDG, GridPP), but nowadays it is widely used in many other fields, such as
oil and gas, banking, and education, where it has made large contributions.
In the past few years grid computing technology has gained much attention in the power engineering field, and significant research is under way at
numerous places to investigate the potential use of grid computing
technology and to apply it in the power engineering field
(Chen et al., 2004; Taylor et al., 2006; Ali et al., 2006; Wang and Liu, 2005;
Ali et al., 2005; Axceleon and PTI, 2003). Grid computing can provide efficient and effective computing services to meet the increasing need for
high performance computation in the power system reliability and security analyses facing today's power industry. It can also provide remote access
to distributed resources of the power system, thereby providing eﬀective and
fast mechanisms of monitoring and control of power systems. Overall, it can
provide eﬃcient services in power system monitoring and control, scheduling, power system reliability and security analysis, planning, and electricity
market forecast (Chen et al., 2004; Ali et al., 2005).
Grid Computing is a form of parallel and distributed computing that
involves coordination and sharing of computing, application, data storage,
and network resources across dynamic and geographically distributed organizations (Asadzadeh et al., 2004). This integration creates a virtual organization, wherein a number of mutually distrustful participants with varying
degrees of prior relationship share resources to perform computational tasks (Foster and Kesselman, 1997; Foster et al., 2001). Some of
the commonly used grid computing tools include Globus (Foster and Kesselman, 1997) and EnFuzion (Axceleon). EnFuzion is a distributed computing tool developed by Turbolinux. It offers strong robustness, high reliability,
efficient network utilization, an intuitive GUI, multi-platform and
multi-core support, flexible scheduling with a lights-out option, and extensive
administrative tools (Axceleon).
Detailed discussion on grid computing will be given in Chapter 4 of this book.
2.4 Probabilistic vs Deterministic Approaches
Power systems must be planned to supply electricity to end
users with a high level of reliability while meeting security requirements.
Fundamentally, these requirements conflict with economic concerns, and tradeoffs usually have to be made in system operation and planning. Moreover,
because the power system has been operating for many years following a similar pattern, system operators and engineers could predict future conditions
with reasonable accuracy. However, with the changes over the past few years,
especially with deregulation and increased interconnections, it is more and
more diﬃcult to predict the system conditions, although forecasting is an
important task for system operators.
Traditionally, system security and reliability are evaluated within a
deterministic framework. The deterministic approach basically studies the
system stability, given a set of network conﬁgurations, system loading conditions and disturbances, etc. (Kundur, 1994). Since the operation of the power
system is stochastic in nature and so are the disturbances, engineers have to
run thousands of time domain simulations to determine the system stability
for a set of credible disturbances before dispatching. Under this deterministic regime, system operations and planning require experience and judgment
from the system operators. Similarly, in the planning stage, planning engineers need to carry out such analysis to evaluate system reliability, and adjust
the expansion plans if necessary. Despite its popularity with many research
organizations and utilities, the time-domain simulation method requires
intensive, time-consuming computation and has become feasible only in recent years
with progress in computer engineering. This significant disadvantage has
motivated engineers and scholars to develop new methods to account for the
stochastic nature of system stability. Studying only the worst case scenario
is one solution to the problem, but the result obtained is too conservative and therefore impractical on economic grounds in both operation and planning.
In the articles by Billinton and Kuruganty, 1980; Billinton and Kuruganty, 1979; Hsu and Chang, 1988, probabilistic indexes for transient stability have been proposed. These methods consider various uncertainties in
power systems, such as the loading conditions, fault locations and types, etc.
The system stability can be assessed in the probabilistic framework which
provides the system operator and planner with a clearer picture of stability
status. The idea of probabilistic stability assessment is extended to small
signal stability analysis in this book via a Monte Carlo simulation approach.
In the probabilistic study of power system stability, several methods such
as the cumulant and moment methods can be applied. These methods use
cumulant or moment models to calculate the statistics of system eigenvalues using expansions such as the Gram-Charlier series (Hsu
and Chang, 1988; Wang et al., 2000; Zhang and Lee, 2004; Da Silva et al.,
1990). The advantage of these methods is fast computational speed. However,
approximation is usually needed in these methods (Wang et al., 2000; Zhang
and Lee, 2004).
The Monte Carlo technique is another option which is more appropriate
for analyzing the complexities in large-scale power systems with high accuracy, though it may require more computational eﬀort (Robert and Casella,
2004; Billinton and Li, 1994). The Monte Carlo method involves using random numbers and probabilistic models to solve problems with uncertainties.
Reliability study in power systems is a case in point (Billinton and Li, 1994).
Simply speaking, it is a method for iteratively evaluating a deterministic
model using sets of random numbers. Take power system small signal stability assessment for example. The Monte Carlo method can be applied for
probabilistic small signal stability analysis. The method starts from the probabilistic modeling of system parameters of interest, such as the dispatching
of generators, electric loads at various nodal locations and network parameters, etc. Next, a set of random numbers with a uniform distribution will be
generated. Subsequently, these random numbers are fed into the probabilistic
models to generate actual values of the parameters. The load ﬂow analysis
and system eigenvalue calculation can then be carried out, followed by the
small signal stability assessment via system modal analysis. The Monte Carlo
method can also be used for many other probabilistic system analysis tasks.
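The sampling loop described above can be sketched for a toy single-machine case, where small signal stability reduces to the eigenvalues of the characteristic polynomial λ² + Dλ + K of the linearized swing equation. The parameter distributions below are illustrative assumptions, not data from any real system.

```python
import math
import random

def eig_real_parts(K, D):
    """Real parts of the roots of lambda^2 + D*lambda + K = 0,
    the characteristic polynomial of the linearized swing equation."""
    disc = D * D - 4.0 * K
    if disc >= 0.0:
        r = math.sqrt(disc)
        return ((-D + r) / 2.0, (-D - r) / 2.0)
    return (-D / 2.0, -D / 2.0)   # complex pair shares one real part

random.seed(1)
trials = 10_000
unstable = 0
for _ in range(trials):
    # Steps 1-2: draw uniform random numbers and map them onto the
    # probabilistic parameter models (illustrative ranges).
    K = random.uniform(-0.2, 2.0)   # synchronizing coefficient (loading)
    D = random.uniform(0.1, 1.0)    # damping coefficient
    # Step 3: eigenvalue calculation for the sampled operating point;
    # the system is unstable if any eigenvalue has a positive real part.
    if max(eig_real_parts(K, D)) > 0.0:
        unstable += 1

p_unstable = unstable / trials
print(f"estimated probability of small-signal instability: {p_unstable:.3f}")
```

A full study would replace the toy polynomial with a load flow and modal analysis of the complete state matrix for each sample, but the structure — sample, solve, count — is exactly the Monte Carlo procedure described above.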
For transmission system planning, the deterministic criteria may ignore
important system parameters which may have signiﬁcant impacts on the
system reliability. The deterministic planning also favors a conservative
result based on the commonly used worst case conditions. According to EPRI
(EPRI, 2004), deterministic transmission planning fails to provide a measure
of the reliability of the transmission system design. The techniques which can
eﬀectively consider uncertainties in the planning process have been investigated by researchers and engineers for probabilistic transmission planning
practices. Under the probabilistic approach, system failure risk reduction
can be clearly illustrated. The impact of system failure can be assessed and
considered in the planning process. The probabilistic transmission planning
methods developed enable quantiﬁcation of risks associated with diﬀerent
planning options. They also provide useful insights into the design process.
EPRI (EPRI, 2004; Zhang et al., 2004; Choi et al., 2005; EPRI-PRA,
2002) proposed probabilistic power system planning to consider the stochastic nature of the power system and compared the traditional deterministic
approach vs. the probabilistic approach. A summary of deterministic and
probabilistic system analysis approaches is given in Table 2.1.
Table 2.1. A Summary of Deterministic vs Probabilistic Approaches (comparing the two approaches for small signal stability and power system planning)
For transmission system planning, generally speaking, the deterministic
method uses simple rules compared with probabilistic methods. Deterministic methods have been implemented in computer software for easy analysis
over the years of system planning practices. However, probabilistic methods normally require new software and higher computational costs in order
to cope with the more comprehensive analysis tasks involved. Although the
probabilistic method is more complex than the deterministic method and
requires more computational power, the beneﬁts of the probabilistic method
outweigh those of the deterministic one because (1) it enables the tradeoff between
reliability and economics in transmission planning; and (2) it is able to evaluate risks in the process so as to enable risk management in the planning process. Transmission system planning easily involves tens of millions of dollars;
the two advantages of the probabilistic approach make it a very attractive
option for system planners.
Detailed discussions on probabilistic vs deterministic methods will be
given in Chapter 5.
2.5 Phasor Measurement Units
Conventionally, power system control and protection are designed
to respond to large disturbances, mainly faults, in the system. Following
the lessons learned from the 2003 blackout, protection system failure has been
identified as a major factor leading to the cascading failure of a power system. Consequently, traditional system protection and control need to be
reviewed and new techniques are needed to cope with today’s power system
operational needs (EPRI, 2007).
The phasor measurement unit (PMU) is a digital device that records
the magnitudes and phase angles of currents and voltages in a power system.
PMUs can provide real-time power system information in a synchronized way, either as standalone devices or integrated into other
protective devices. PMUs have been installed in the power systems across
large geographical areas. They provide valuable potential for improving the
monitoring, control, and protection of the power system in many countries.
The synchronized phasor measurement data provides highly useful system
dynamics information. Such information is particularly useful when the system is in a stressed operating state or subject to potential system instability.
Such information can be used to assist the situational awareness for the system control centre operators. In the article by Sun and Lee, 2008, a method is
proposed to use phase-space visualization and pattern recognition to identify
abnormal patterns in system dynamics in order to predict system cascading
failure. By strategically selecting the locations for PMU installations in a
transmission network, the real time synchronized phasor measurement data
can be used to calculate indices which can be used to measure the vulnerability of the system against possible cascading failures (IEEE PES CAMS
Taskforce, 2009; Taylor et al., 2005; Zima and Andersson, 2004).
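The core phasor computation inside a PMU can be sketched with the standard one-cycle DFT estimate of the fundamental; the sampling rate (32 samples per cycle) and the test waveform below are illustrative assumptions.

```python
import cmath
import math

def phasor(samples):
    """One-cycle DFT phasor estimate: given exactly one cycle of N
    samples, return (RMS magnitude, angle in degrees) of the
    fundamental-frequency component."""
    n = len(samples)
    acc = sum(s * cmath.exp(-2j * math.pi * k / n)
              for k, s in enumerate(samples))
    x = math.sqrt(2.0) * acc / n   # scale so |x| is the RMS magnitude
    return abs(x), math.degrees(cmath.phase(x))

# Synthesize one cycle of a voltage waveform, 32 samples per cycle:
# v(k) = 100 * cos(2*pi*k/N + 30 deg)
N, amp, phase_deg = 32, 100.0, 30.0
samples = [amp * math.cos(2 * math.pi * k / N + math.radians(phase_deg))
           for k in range(N)]

mag, ang = phasor(samples)
print(f"|V| = {mag:.2f} (RMS), angle = {ang:.2f} deg")
```

A real PMU additionally time-stamps each estimate against a GPS-disciplined clock, which is what makes phasors from devices hundreds of kilometres apart directly comparable.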
The increasingly popular wide area monitoring, protection, and control
scheme relies heavily on synchronized real time system information. PMUs
together with advanced telecommunication techniques are essential for this
scheme. In summary, PMUs can be used to assist in state estimation,
detect system inter-area oscillations and assist in determining corresponding
controls, provide system voltage stability monitoring and control, facilitate