/**
 * @author Jane Programmer
 * @cwid   123 45 678
 * @class  COSC 2336, Spring 2019
 * @ide    Visual Studio Community 2017
 * @date   April 8, 2019
 * @assg   Assignment 12
 *
 * @description Assignment 12 Binary Search Trees
 */
#include <cassert>
#include <iostream>
#include "BinaryTree.hpp"
using namespace std;

/** main
 * The main entry point for this program.  Execution of this program
 * will begin with this main function.
 *
 * @param argc The command line argument count, which is the number of
 *   command line arguments provided by the user when they started
 *   the program.
 * @param argv The command line arguments, an array of character
 *   arrays.
 *
 * @returns An int value indicating program exit status.  Usually 0
 *   is returned to indicate normal exit, and a non-zero value
 *   is returned to indicate an error condition.
 */
int main(int argc, char** argv)
{
  // -----------------------------------------------------------------------
  cout << "--------------- testing BinaryTree construction ----------------" << endl;
  BinaryTree t;
  cout << "<constructor> Size of new empty tree: " << t.size() << endl;
  cout << t << endl;
  assert(t.size() == 0);
  cout << endl;

  // -----------------------------------------------------------------------
  cout << "--------------- testing BinaryTree insertion -------------------" << endl;
  t.insert(10);
  cout << "<insert> Inserted into empty tree, size: " << t.size() << endl;
  cout << t << endl;
  assert(t.size() == 1);

  t.insert(3);
  t.insert(7);
  t.insert(12);
  t.insert(15);
  t.insert(2);
  cout << "<insert> inserted 5 more items, size: " << t.size() << endl;
  cout << t << endl;
  assert(t.size() == 6);
  cout << endl;

  // -----------------------------------------------------------------------
  cout << "--------------- testing BinaryTree height ----------------------" << endl;
  //cout << "<height> Current tree height: " << t.height() << endl;
  //assert(t.height() == 3);

  // increase height by 2
  //t.insert(4);
  //t.insert(5);
  //cout << "<height> after inserting nodes, height: " << t.height()
  //     << " size: " << t.size() << endl;
  //cout << t << endl;
  //assert(t.height() == 5);
  //assert(t.size() == 8);
  cout << endl;

  // -----------------------------------------------------------------------
  cout << "--------------- testing BinaryTree clear -----------------------" << endl;
  //t.clear();
  //cout << "<clear> after clearing tree, height: " << t.height()
  //     << " size: " << t.size() << endl;
  //cout << t << endl;
  //assert(t.size() == 0);
  //assert(t.height() == 0);
  cout << endl;

  // return 0 to indicate successful completion
  return 0;
}
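The driver above includes "BinaryTree.hpp", which is not shown here. For readers following along, below is a minimal header-only sketch of the interface the tests assume. The class name and the members insert, size, height, clear, and the stream operator are taken from the driver; everything else (node layout, in-order printing format, how duplicates are counted) is an assumption and will differ from the course's actual implementation.

#ifndef BINARY_TREE_HPP
#define BINARY_TREE_HPP

#include <ostream>

// Minimal sketch of the interface exercised by the test driver.
class BinaryTree
{
public:
  BinaryTree() = default;
  ~BinaryTree() { destroy(root); }

  void insert(int value) { root = insert(root, value); count++; }
  int size() const { return count; }
  int height() const { return height(root); }
  void clear() { destroy(root); root = nullptr; count = 0; }

  // In-order listing of the tree contents, used by the driver's "cout << t".
  friend std::ostream& operator<<(std::ostream& out, const BinaryTree& t)
  {
    out << "[ ";
    BinaryTree::print(out, t.root);
    out << "]";
    return out;
  }

private:
  struct Node
  {
    int value;
    Node* left = nullptr;
    Node* right = nullptr;
  };

  Node* root = nullptr;
  int count = 0;

  // Standard binary search tree insertion.
  static Node* insert(Node* node, int value)
  {
    if (node == nullptr)
    {
      Node* n = new Node;
      n->value = value;
      return n;
    }
    if (value < node->value)
      node->left = insert(node->left, value);
    else
      node->right = insert(node->right, value);
    return node;
  }

  // Height counted as nodes on the longest root-to-leaf path (0 when empty).
  static int height(const Node* node)
  {
    if (node == nullptr)
      return 0;
    int left = height(node->left);
    int right = height(node->right);
    return 1 + (left > right ? left : right);
  }

  static void destroy(Node* node)
  {
    if (node == nullptr)
      return;
    destroy(node->left);
    destroy(node->right);
    delete node;
  }

  static void print(std::ostream& out, const Node* node)
  {
    if (node == nullptr)
      return;
    print(out, node->left);
    out << node->value << " ";
    print(out, node->right);
  }
};

#endif // BINARY_TREE_HPP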
Cyber Attacks
“Dr. Amoroso’s fifth book Cyber Attacks: Protecting National Infrastructure outlines the challenges of protecting our nation’s infrastructure from cyber attack using security techniques established to protect much smaller and less complex environments. He proposes a brand new type of national infrastructure protection methodology and outlines a strategy presented as a series of ten basic design and operations principles ranging from deception to response. The bulk of the text covers each of these principles in technical detail. While several of these principles would be daunting to implement and practice they provide the first clear and concise framework for discussion of this critical challenge. This text is thought-provoking and should be a ‘must read’ for anyone concerned with cybersecurity in the private or government sector.”
— Clayton W. Naeve, Ph.D.,
Senior Vice President and Chief Information Officer,
Endowed Chair in Bioinformatics,
St. Jude Children’s Research Hospital,
Memphis, TN
“Dr. Ed Amoroso reveals in plain English the threats and weaknesses of our critical infrastructure balanced against practices that reduce the exposures. This is an excellent guide to the understanding of the cyber-scape that the security professional navigates. The book takes complex concepts of security and simplifies it into coherent and simple to understand concepts.”
— Arnold Felberbaum,
Chief IT Security & Compliance Officer,
Reed Elsevier
“The national infrastructure, which is now vital to communication, commerce and entertainment in everyday life, is highly vulnerable to malicious attacks and terrorist threats. Today, it is possible for botnets to penetrate millions of computers around the world in few minutes, and to attack the valuable national infrastructure.
“As the New York Times reported, the growing number of threats by botnets suggests that this cyber security issue has become a serious problem, and we are losing the war against these attacks.
“While computer security technologies will be useful for network systems, the reality tells us that this conventional approach is not effective enough for the complex, large-scale national infrastructure.
“Not only does the author provide comprehensive methodologies based on 25 years of experience in cyber security at AT&T, but he also suggests ‘security through obscurity,’ which attempts to use secrecy to provide security.”
— Byeong Gi Lee,
President, IEEE Communications Society, and
Commissioner of the Korea Communications Commission (KCC)
Cyber Attacks
Protecting National Infrastructure
Edward G. Amoroso
AMSTERDAM • BOSTON • HEIDELBERG • LONDON
NEW YORK • OXFORD • PARIS • SAN DIEGO
SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Butterworth-Heinemann is an imprint of Elsevier
Acquiring Editor: Pam Chester
Development Editor: Gregory Chalson
Project Manager: Paul Gottehrer
Designer: Alisa Andreola

Butterworth-Heinemann is an imprint of Elsevier
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA

© 2011 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods or professional practices, may become necessary.
Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information or methods described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.
To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data
Amoroso, Edward G.
Cyber attacks : protecting national infrastructure / Edward Amoroso.
  p. cm.
Includes index.
ISBN 978-0-12-384917-5
1. Cyberterrorism—United States—Prevention. 2. Computer security—United States. 3. National security—United States. I. Title.
HV6773.2.A47 2011
363.325'90046780973—dc22    2010040626

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

Printed in the United States of America
10 11 12 13 14    10 9 8 7 6 5 4 3 2 1

For information on all BH publications visit our website at www.elsevierdirect.com/security
CONTENTS

Preface ix
Acknowledgment xi

Chapter 1 Introduction 1
  National Cyber Threats, Vulnerabilities, and Attacks 4
  Botnet Threat 6
  National Cyber Security Methodology Components 9
  Deception 11
  Separation 13
  Diversity 16
  Consistency 17
  Depth 19
  Discretion 20
  Collection 21
  Correlation 23
  Awareness 25
  Response 26
  Implementing the Principles Nationally 28

Chapter 2 Deception 31
  Scanning Stage 35
  Deliberately Open Ports 37
  Discovery Stage 39
  Deceptive Documents 41
  Exploitation Stage 42
  Procurement Tricks 45
  Exposing Stage 46
  Interfaces Between Humans and Computers 47
  National Deception Program 49

Chapter 3 Separation 51
  What Is Separation? 53
  Functional Separation 55
  National Infrastructure Firewalls 57
  DDOS Filtering 60
  SCADA Separation Architecture 62
  Physical Separation 63
  Insider Separation 65
  Asset Separation 68
  Multilevel Security (MLS) 70

Chapter 4 Diversity 73
  Diversity and Worm Propagation 75
  Desktop Computer System Diversity 77
  Diversity Paradox of Cloud Computing 80
  Network Technology Diversity 82
  Physical Diversity 85
  National Diversity Program 87

Chapter 5 Commonality 89
  Meaningful Best Practices for Infrastructure Protection 92
  Locally Relevant and Appropriate Security Policy 95
  Culture of Security Protection 97
  Infrastructure Simplification 99
  Certification and Education 102
  Career Path and Reward Structure 105
  Responsible Past Security Practice 106
  National Commonality Program 107

Chapter 6 Depth 109
  Effectiveness of Depth 111
  Layered Authentication 115
  Layered E-Mail Virus and Spam Protection 119
  Layered Access Controls 120
  Layered Encryption 122
  Layered Intrusion Detection 124
  National Program of Depth 126

Chapter 7 Discretion 129
  Trusted Computing Base 130
  Security Through Obscurity 133
  Information Sharing 135
  Information Reconnaissance 137
  Obscurity Layers 139
  Organizational Compartments 141
  National Discretion Program 143

Chapter 8 Collection 145
  Collecting Network Data 148
  Collecting System Data 150
  Security Information and Event Management 154
  Large-Scale Trending 156
  Tracking a Worm 159
  National Collection Program 161

Chapter 9 Correlation 163
  Conventional Security Correlation Methods 167
  Quality and Reliability Issues in Data Correlation 169
  Correlating Data to Detect a Worm 170
  Correlating Data to Detect a Botnet 172
  Large-Scale Correlation Process 174
  National Correlation Program 176

Chapter 10 Awareness 179
  Detecting Infrastructure Attacks 183
  Managing Vulnerability Information 184
  Cyber Security Intelligence Reports 186
  Risk Management Process 188
  Security Operations Centers 190
  National Awareness Program 192

Chapter 11 Response 193
  Pre- Versus Post-Attack Response 195
  Indications and Warning 197
  Incident Response Teams 198
  Forensic Analysis 201
  Law Enforcement Issues 203
  Disaster Recovery 204
  National Response Program 206

Appendix Sample National Infrastructure Protection Requirements 207
  Sample Deception Requirements (Chapter 2) 208
  Sample Separation Requirements (Chapter 3) 209
  Sample Diversity Requirements (Chapter 4) 211
  Sample Commonality Requirements (Chapter 5) 212
  Sample Depth Requirements (Chapter 6) 213
  Sample Discretion Requirements (Chapter 7) 214
  Sample Collection Requirements (Chapter 8) 214
  Sample Correlation Requirements (Chapter 9) 215
  Sample Awareness Requirements (Chapter 10) 216
  Sample Response Requirements (Chapter 11) 216

Index 219
PREFACE
Man did not enter into society to become worse than he was before, nor to have fewer rights than he had before, but to have those rights better secured.
Thomas Paine in Common Sense
Before you invest any of your time with this book, please take a moment and look over the following points. They outline my basic philosophy of national infrastructure security. I think that your reaction to these points will give you a pretty good idea of what your reaction will be to the book.
1. Citizens of free nations cannot hope to express or enjoy their freedoms if basic security protections are not provided. Security does not suppress freedom—it makes freedom possible.
2. In virtually every modern nation, computers and networks power critical infrastructure elements. As a result, cyber attackers can use computers and networks to damage or ruin the infrastructures that citizens rely on.
3. Security protections, such as those in security books, were designed for small-scale environments such as enterprise computing environments. These protections do not extrapolate to the protection of massively complex infrastructure.
4. Effective national cyber protections will be driven largely by cooperation and coordination between commercial, industrial, and government organizations. Thus, organizational management issues will be as important to national defense as technical issues.
5. Security is a process of risk reduction, not risk removal. Therefore, concrete steps can and should be taken to reduce, but not remove, the risk of cyber attack to national infrastructure.
6. The current risk of catastrophic cyber attack to national infrastructure must be viewed as extremely high, by any realistic measure. Taking little or no action to reduce this risk would be a foolish national decision.
The chapters of this book are organized around ten basic principles that will reduce the risk of cyber attack to national infrastructure in a substantive manner. They are driven by experiences gained managing the security of one of the largest, most complex infrastructures in the world, by years of learning from various commercial and government organizations, and by years of interaction with students and academic researchers in the security field. They are also driven by personal experiences dealing with a wide range of successful and unsuccessful cyber attacks, including ones directed at infrastructure of considerable value. The implementation of the ten principles in this book will require national resolve and changes to the way computing and networking elements are designed, built, and operated in the context of national infrastructure. My hope is that the suggestions offered in these pages will make this process easier.
ACKNOWLEDGMENT
The cyber security experts in the AT&T Chief Security Office, my colleagues across AT&T Labs and the AT&T Chief Technology Office, my colleagues across the entire AT&T business, and my graduate and undergraduate students in the Computer Science Department at the Stevens Institute of Technology, have had a profound impact on my thinking and on the contents of this book. In addition, many prominent enterprise customers of AT&T with whom I’ve had the pleasure of serving, especially those in the United States Federal Government, have been great influencers in the preparation of this material.
I’d also like to extend a great thanks to my wife Lee, daughter Stephanie (17), son Matthew (15), and daughter Alicia (9) for their collective patience with my busy schedule.
Edward G. Amoroso
Florham Park, NJ
September 2010
INTRODUCTION
Cyber Attacks. DOI: 10.1016/B978-0-12-384917-5.00001-9 © 2011 Elsevier Inc. All rights reserved.
Somewhere in his writings—and I regret having forgotten where—John Von Neumann draws attention to what seemed to him a contrast. He remarked that for simple mechanisms it is often easier to describe how they work than what they do, while for more complicated mechanisms it was usually the other way round.
Edsger W. Dijkstra 1
National infrastructure refers to the complex, underlying delivery and support systems for all large-scale services considered absolutely essential to a nation. These services include emergency response, law enforcement databases, supervisory control and data acquisition (SCADA) systems, power control networks, military support services, consumer entertainment systems, financial applications, and mobile telecommunications. Some national services are provided directly by government, but most are provided by commercial groups such as Internet service providers, airlines, and banks. In addition, certain services considered essential to one nation might include infrastructure support that is controlled by organizations from another nation. This global interdependency is consistent with the trends referred to collectively by Thomas Friedman as a “flat world.” 2

National infrastructure, especially in the United States, has always been vulnerable to malicious physical attacks such as equipment tampering, cable cuts, facility bombing, and asset theft. The events of September 11, 2001, for example, are the most prominent and recent instance of a massive physical attack directed at national infrastructure. During the past couple of decades, however, vast portions of national infrastructure have become reliant on software, computers, and networks. This reliance typically includes remote access, often over the Internet, to the systems that control national services. Adversaries thus can initiate cyber attacks on infrastructure using worms, viruses, leaks, and the like. These attacks indirectly target national infrastructure through their associated automated control systems (see Figure 1.1).

1 E.W. Dijkstra, Selected Writings on Computing: A Personal Perspective, Springer-Verlag, New York, 1982, pp. 212–213.
2 T. Friedman, The World Is Flat: A Brief History of the Twenty-First Century, Farrar, Straus, and Giroux, New York, 2007. (Friedman provides a useful economic backdrop to the global aspect of the cyber attack trends suggested in this chapter.)
A seemingly obvious approach to dealing with this national cyber threat would involve the use of well-known computer security techniques. After all, computer security has matured substantially in the past couple of decades, and considerable expertise now exists on how to protect software, computers, and networks. In such a national scheme, safeguards such as firewalls, intrusion detection systems, antivirus software, passwords, scanners, audit trails, and encryption would be directly embedded into infrastructure, just as they are currently in small-scale environments. These national security systems would be connected to a centralized threat management system, and incident response would follow a familiar sort of enterprise process. Furthermore, to ensure security policy compliance, one would expect the usual programs of end-user awareness, security training, and third-party audit to be directed toward the people building and operating national infrastructure. Virtually every national infrastructure protection initiative proposed to date has followed this seemingly straightforward path. 3
[Figure 1.1 National infrastructure cyber and physical attacks: direct physical attacks (“tampering, cuts, bombs”) and indirect cyber attacks (“worms, viruses, leaks”) both reach national infrastructure through its automated control software, computers, and networks.]

3 Executive Office of the President, Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure, U.S. White House, Washington, D.C., 2009 (http://handle.dtic.mil/100.2/ADA501541).

While well-known computer security techniques will certainly be useful for national infrastructure, most practical experience to date suggests that this conventional approach will not be sufficient. A primary reason is the size, scale, and scope inherent in complex national infrastructure. For example, where an enterprise might involve manageably sized assets, national infrastructure will require unusually powerful computing support with the ability to handle enormous volumes of data. Such volumes will easily exceed the storage and processing capacity of typical enterprise security tools such as a commercial threat management system. Unfortunately, this incompatibility conflicts with current initiatives in government and industry to reduce costs through the use of common commercial off-the-shelf products.

In addition, whereas enterprise systems can rely on manual intervention by a local expert during a security disaster, large-scale national infrastructure generally requires a carefully orchestrated response by teams of security experts using predetermined processes. These teams of experts will often work in different groups, organizations, or even countries. In the worst cases, they will cooperate only if forced by government, often sharing just the minimum amount of information to avoid legal consequences. An additional problem is that the complexity associated with national infrastructure leads to the bizarre situation where response teams often have partial or incorrect understanding about how the underlying systems work. For these reasons, seemingly convenient attempts to apply existing small-scale security processes to large-scale infrastructure attacks will ultimately fail (see Figure 1.2).

As a result, a brand-new type of national infrastructure protection methodology is required—one that combines the best elements of existing computer and network security techniques with the unique and difficult challenges associated with complex, large-scale national services. This book offers just such a protection methodology for national infrastructure. It is based on a quarter century of practical experience designing, building, and operating cyber security systems for government, commercial, and consumer infrastructure. It is represented as a series of protection principles that can be applied to new or existing systems. Because of the unique needs of national infrastructure, especially its massive size, scale, and scope, some aspects of the methodology will be unfamiliar to the computer security community. In fact, certain elements of the approach, such as our favorable view of “security through obscurity,” might appear in direct conflict with conventional views of how computers and networks should be protected.

[Figure 1.2 Differences between small- and large-scale cyber security. Large-scale attributes complicate cyber security: collection (small volume vs. high volume), emergency (possibly manual vs. process-based), expertise (local expert vs. distributed expertise), knowledge (high vs. partial or incorrect), analysis (focused vs. broad).]

[Margin note: National infrastructure databases far exceed the size of even the largest commercial databases.]
National Cyber Threats, Vulnerabilities, and Attacks
Conventional computer security is based on the oft-repeated taxonomy of security threats which includes confidentiality, integrity, availability, and theft. In the broadest sense, all four diverse threat types will have applicability in national infrastructure. For example, protections are required equally to deal with sensitive information leaks (confidentiality), worms affecting the operation of some critical application (integrity), botnets knocking out an important system (availability), or citizens having their identities compromised (theft). Certainly, the availability threat to national services must be viewed as particularly important, given the nature of the threat and its relation to national assets. One should thus expect particular attention to availability threats to national infrastructure. Nevertheless, it makes sense to acknowledge that all four types of security threats in the conventional taxonomy of computer security must be addressed in any national infrastructure protection methodology.

[Margin note: Any of the most common security concerns—confidentiality, integrity, availability, and theft—threaten our national infrastructure.]

Vulnerabilities are more difficult to associate with any taxonomy. Obviously, national infrastructure must address well-known problems such as improperly configured equipment, poorly designed local area networks, unpatched system software, exploitable bugs in application code, and locally disgruntled employees. The problem is that the most fundamental vulnerability in national infrastructure involves the staggering complexity inherent in the underlying systems. This complexity is so pervasive that many times security incidents uncover aspects of computing functionality that were previously unknown to anyone, including sometimes the system designers. Furthermore, in certain cases, the optimal security solution involves simplifying and cleaning up poorly conceived infrastructure. This is bad news, because most large organizations are inept at simplifying much of anything.

The best one can do for a comprehensive view of the vulnerabilities associated with national infrastructure is to address their relative exploitation points. This can be done with an abstract national infrastructure cyber security model that includes three types of malicious adversaries: external adversary (hackers on the Internet), internal adversary (trusted insiders), and supplier adversary (vendors and partners). Using this model, three exploitation points emerge for national infrastructure: remote access (Internet and telework), system administration and normal usage (management and use of software, computers, and networks), and supply chain (procurement and outsourcing) (see Figure 1.3).
These three exploitation points and three types of adversaries can be associated with a variety of possible motivations for initiating either a full or test attack on national infrastructure.
[Figure 1.3 Adversaries and exploitation points in national infrastructure: three adversaries (external, internal, supplier) exploit three points (remote access; system administration and normal usage; supply chain) to reach national infrastructure software, computers, and networks.]
Five Possible Motivations for an Infrastructure Attack
● Country-sponsored warfare—National infrastructure attacks sponsored and funded by enemy countries must be considered the most significant potential motivation, because the intensity of adversary capability and willingness to attack is potentially unlimited.
● Terrorist attack—The terrorist motivation is also significant, especially because groups driven by terror can easily obtain sufficient capability and funding to perform significant attacks on infrastructure.
● Commercially motivated attack—When one company chooses to utilize cyber attacks to gain a commercial advantage, it becomes a national infrastructure incident if the target company is a purveyor of some national asset.
● Financially driven criminal attack—Identity theft is the most common example of a financially driven attack by criminal groups, but other cases exist, such as companies being extorted to avoid a cyber incident.
● Hacking—One must not forget that many types of attacks are still driven by the motivation of hackers, who are often just mischievous youths trying to learn or to build a reputation within the hacking community. This is much less a sinister motivation, and national leaders should try to identify better ways to tap this boundless capability and energy.
Each of the three exploitation points might be utilized in a cyber attack on national infrastructure. For example, a supplier might use a poorly designed supply chain to insert Trojan horse code into a software component that controls some national asset, or a hacker on the Internet might take advantage of some unprotected Internet access point to break into a vulnerable service. Similarly, an insider might use trusted access for either system administration or normal system usage to create an attack. The potential also exists for an external adversary to gain valuable insider access through patient, measured means, such as gaining employment in an infrastructure-supporting organization and then becoming trusted through a long process of work performance. In each case, the possibility exists that a limited type of engagement might be performed as part of a planned test or exercise. This seems especially likely if the attack is country or terrorist sponsored, because it is consistent with past practice.

At each exploitation point, the vulnerability being used might be a well-known problem previously reported in an authoritative public advisory, or it could be a proprietary issue kept hidden by a local organization. It is entirely appropriate for a recognized authority to make a detailed public vulnerability advisory if the benefits of notifying the good guys outweigh the risks of alerting the bad guys. This cost–benefit result usually occurs when many organizations can directly benefit from the information and can thus take immediate action. When the reported vulnerability is unique and isolated, however, then reporting the details might be irresponsible, especially if the notification process does not enable a more timely fix. This is a key issue, because many government authorities continue to consider new rules for mandatory reporting. If the information being demanded is not properly protected, then the reporting process might result in more harm than good.

[Margin note: When to issue a vulnerability risk advisory and when to keep the risk confidential must be determined on a case-by-case basis, depending on the threat.]

Botnet Threat

Perhaps the most insidious type of attack that exists today is the botnet. 4 In short, a botnet involves remote control of a collection of compromised end-user machines, usually broadband-connected PCs. The controlled end-user machines, which are referred to as bots, are programmed to attack some target that is designated by the botnet controller. The attack is tough to stop because end-user machines are typically administered in an ineffective manner. Furthermore, once the attack begins, it occurs from sources potentially scattered across geographic, political, and service provider boundaries. Perhaps worse, bots are programmed to take commands from multiple controller systems, so any attempts to destroy a given controller result in the bots simply homing to another one.

4 Much of the material on botnets in this chapter is derived from work done by Brian Rexroad, David Gross, and several others from AT&T.
The Five Entities That Comprise a Botnet Attack
● Botnet operator—This is the individual, group, or country that creates the botnet, including its setup and operation. When the botnet is used for financial gain, it is the operator who will benefit. Law enforcement and cyber security initiatives have found it very difficult to identify the operators. The press, in particular, has done a poor job reporting on the presumed identity of botnet operators, often suggesting sponsorship by some country when little supporting evidence exists.
● Botnet controller—This is the set of servers that command and control the operation of a botnet. Usually these servers have been maliciously compromised for this purpose. Many times, the real owner of a server that has been compromised will not even realize what has occurred. The type of activity directed by a controller includes all recruitment, setup, communication, and attack activity. Typical botnets include a handful of controllers, usually distributed across the globe in a non-obvious manner.
● Collection of bots—These are the end-user, broadband-connected PCs infected with botnet malware. They are usually owned and operated by normal citizens, who become unwitting and unknowing dupes in a botnet attack. When a botnet includes a concentration of PCs in a given region, observers often incorrectly attribute the attack to that region. The use of smart mobile devices in a botnet will grow as upstream capacity and device processing power increase.
● Botnet software drop—Most botnets include servers designed to store software that might be useful for the botnets during their lifecycle. Military personnel might refer to this as an arsenal. Like controllers, botnet software drop points are usually servers compromised for this purpose, often unknown to the normal server operator.
● Botnet target—This is the location that is targeted in the attack. Usually, it is a website, but it can really be any device, system, or network that is visible to the bots. In most cases, botnets target prominent and often controversial websites, simply because they are visible via the Internet and generally have a great deal at stake in terms of their availability. This increases gain and leverage for the attacker. Logically, however, botnets can target anything visible.
The way a botnet works is that the controller is set up to communicate with the bots via some designated protocol, most often Internet Relay Chat (IRC). This is done via malware inserted into the end-user PCs that comprise the bots. A great challenge in this regard is that home PCs and laptops are so poorly administered. Amazingly, over time, the day-to-day system and security administration task for home computers has gravitated to the end user. This obligation results in both a poor user experience and general dissatisfaction with the security task. For example, when a typical computer buyer brings a new machine home, it has probably been preloaded with security software by the retailer. From this point onward, however, that home buyer is then tasked with all responsibility for protecting the machine. This includes keeping firewall, intrusion detection, antivirus, and antispam software up to date, as well as ensuring that all software patches are current. When these tasks are not well attended, the result is a more vulnerable machine that is easily turned into a bot. (Sadly, even if a machine is properly managed, expert bot software designers might find a way to install the malware anyway.)

[Margin note: Home PC users may never know they are being used for a botnet scheme.]

Once a group of PCs has been compromised into bots, attacks can thus be launched by the controller via a command to the bots, which would then do as they are instructed. This might not occur instantaneously with the infection; in fact, experience suggests that many botnets lay dormant for a great deal of time. Nevertheless, all sorts of attacks are possible in a botnet arrangement, including the now-familiar distributed denial of service attack (DDOS). In such a case, the bots create more inbound traffic than the target gateway can handle. For example, if some theoretical gateway allows for 1 Gbps of inbound traffic, and the botnet creates an inbound stream larger than 1 Gbps, then a logjam results at the inbound gateway, and a denial of service condition occurs (see Figure 1.4).

[Figure 1.4 Sample DDOS attack from a botnet: bots on broadband carriers aim 1 Gbps of DDOS traffic at Target A, whose designated carrier provides only 1 Gbps of ingress, so the capacity excess creates a jam.]

[Margin note: A DDOS attack is like a cyber traffic jam.]

Any serious present study of cyber security must acknowledge the unique threat posed by botnets. Virtually any Internet-connected system is vulnerable to major outages from a botnet-originated DDOS attack. The physics of the situation are especially depressing; that is, a botnet that might steal 500 Kbps of upstream capacity from each bot (which would generally allow for concurrent normal computing and networking) would only need three bots to collapse a target T1 connection. Following this logic, only 16,000 bots would be required theoretically to fill up a 10-Gbps connection. Because most of the thousands of botnets that have been observed on the Internet are at least this size, the threat is obvious; however, many recent and prominent botnets such as Storm and Conficker are much larger, comprising as many as several million bots, so the threat to national infrastructure is severe and immediate.
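The back-of-the-envelope arithmetic in the preceding paragraph can be reproduced with a few lines of code. The sketch below simply divides a target link's capacity by an assumed per-bot upstream rate of 500 Kbps; the exact counts it prints (for example, roughly 20,000 bots for a 10-Gbps link) differ slightly from the figures quoted in the text, which presumably assume somewhat different per-bot rates and link overheads.

#include <cmath>
#include <cstdio>

// Number of bots whose combined upstream traffic meets or exceeds the
// capacity of the target link.  All rates are in bits per second.
long botsToSaturate(double linkBps, double perBotBps)
{
  return static_cast<long>(std::ceil(linkBps / perBotBps));
}

int main()
{
  const double perBot = 500e3;  // assumed 500 Kbps stolen from each bot

  std::printf("T1 (1.544 Mbps) : ~%ld bots\n", botsToSaturate(1.544e6, perBot));
  std::printf("1 Gbps gateway  : ~%ld bots\n", botsToSaturate(1e9, perBot));
  std::printf("10 Gbps backbone: ~%ld bots\n", botsToSaturate(10e9, perBot));
  return 0;
}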
National Cyber Security Methodology Components
Our proposed methodology for protecting national infrastructure is presented as a series of ten basic design and operation principles. The implication is that, by using these principles as a guide for either improving existing infrastructure components or building new ones, the security result will be desirable, including a reduced risk from botnets. The methodology addresses all four types of security threats to national infrastructure; it also deals with all three types of adversaries to national infrastructure, as well as the three exploitation points detailed in the infrastructure model. The list of principles in the methodology serves as a guide to the remainder of this chapter, as well as an outline for the remaining chapters of the book:
● Chapter 2: Deception—The openly advertised use of deception creates uncertainty for adversaries because they will not know if a discovered problem is real or a trap. The more common hidden use of deception allows for real-time behavioral analysis if an intruder is caught in a trap. Programs of national infrastructure protection must include the appropriate use of deception, especially to reduce the malicious partner and supplier risk.
● Chapter 3: Separation—Network separation is currently accomplished using firewalls, but programs of national infrastructure protection will require three specific changes. Specifically, national infrastructure must include network-based firewalls on high-capacity backbones to throttle DDOS attacks, internal firewalls to segregate infrastructure and reduce the risk of sabotage, and better tailoring of firewall features for specific applications such as SCADA protocols. 5
● Chapter 4: Diversity—Maintaining diversity in the products, services, and technologies supporting national infrastructure reduces the chances that one common weakness can be exploited to produce a cascading attack. A massive program of coordinated procurement and supplier management is required to achieve a desired level of national diversity across all assets. This will be tough, because it conflicts with most cost-motivated information technology procurement initiatives designed to minimize diversity in infrastructure.
● Chapter 5: Commonality—The consistent use of security best practices in the administration of national infrastructure ensures that no infrastructure component is either poorly managed or left completely unguarded. National programs of standards selection and audit validation, especially with an emphasis on uniform programs of simplification, are thus required. This can certainly include citizen end users, but one should never rely on high levels of security compliance in the broad population.
● Chapter 6: Depth—The use of defense in depth in national infrastructure ensures that no critical asset is reliant on a single security layer; thus, if any layer should fail, an additional layer is always present to mitigate an attack. Analysis is required at the national level to ensure that all critical assets are protected by at least two layers, preferably more.
● Chapter 7: Discretion—The use of personal discretion in the sharing of information about national assets is a practical technique that many computer security experts find difficult to accept because it conflicts with popular views on “security through obscurity.” Nevertheless, large-scale infrastructure protection cannot be done properly unless a national culture of discretion and secrecy is nurtured. It goes without saying that such discretion should never be put in place to obscure illegal or unethical practices.
● Chapter 8: Collection—The collection of audit log information is a necessary component of an infrastructure security scheme, but it introduces privacy, size, and scale issues not seen in smaller computer and network settings. National infrastructure protection will require a data collection approach that is acceptable to the citizenry and provides the requisite level of detail for security analysis.
● Chapter 9: Correlation—Correlation is the most fundamental of all analysis techniques for cyber security, but modern attack methods such as botnets greatly complicate its use for attack-related indicators. National-level correlation must be performed using all available sources and the best available technology and algorithms. Correlating information around a botnet attack is one of the more challenging present tasks in cyber security.
● Chapter 10: Awareness—Maintaining situational awareness is more important in large-scale infrastructure protection than in traditional computer and network security because it helps to coordinate the real-time aspect of multiple infrastructure components. A program of national situational awareness must be in place to ensure proper management decision-making for national assets.
● Chapter 11: Response—Incident response for national infrastructure protection is especially difficult because it generally involves complex dependencies and interactions between disparate government and commercial groups. It is best accomplished at the national level when it focuses on early indications, rather than on incidents that have already begun to damage national assets.

5 R. Kurtz, Securing SCADA Systems, Wiley, New York, 2006. (Kurtz provides an excellent overview of SCADA systems and the current state of the practice in securing them.)

The balance of this chapter will introduce each principle, with discussion on its current use in computer and network security, as well as its expected benefits for national infrastructure protection.
Deception
The principle of deception involves the deliberate introduction of misleading functionality or misinformation into national infrastructure for the purpose of tricking an adversary. The idea is that an adversary would be presented with a view of national infrastructure functionality that might include services or interface components that are present for the sole purpose of fakery. Computer scientists refer to this functionality as a honey pot, but the use of deception for national infrastructure could go far beyond this conventional view. Specifically, deception can be used to protect against certain types of cyber attacks that no other security method will handle. Law enforcement agencies have been using deception effectively for many years, often catching cyber stalkers and criminals by spoofing the reported identity of an end point. Even in the presence of such obvious success, however, the cyber security community has yet to embrace deception as a mainstream protection measure.

[Margin note: Deception is an oft-used tool by law enforcement agencies to catch cyber stalkers and predators.]

Deception in computing typically involves a layer of cleverly designed trap functionality strategically embedded into the internal and external interfaces for services. Stated more simply, deception involves fake functionality embedded into real interfaces. An example might be a deliberately planted trap link on a website that would lead potential intruders into an environment designed to highlight adversary behavior. When the deception is open and not secret, it might introduce uncertainty for adversaries in the exploitation of real vulnerabilities, because the adversary might suspect that the discovered entry point is a trap. When it is hidden and stealth, which is the more common situation, it serves as the basis for real-time forensic analysis of adversary behavior. In either case, the result is a public interface that includes real services, deliberate honey pot traps, and the inevitable exploitable vulnerabilities that unfortunately will be present in all nontrivial interfaces (see Figure 1.5).
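To make the idea of "fake functionality embedded into real interfaces" concrete, the toy dispatcher below mixes two legitimate pages with one decoy path. The decoy path, the page names, and the logging format are invented for illustration; a production honey pot would be far more elaborate and, as noted later in this section, should never be connected to real assets.

#include <ctime>
#include <iostream>
#include <map>
#include <string>

// Toy request dispatcher that mixes real endpoints with one decoy.  The decoy
// path is never linked from legitimate pages, so any request for it is a
// strong signal of probing and is logged for later forensic review.
std::string handleRequest(const std::string& path, const std::string& sourceIp)
{
  static const std::map<std::string, std::string> realPages = {
    {"/", "welcome page"},
    {"/status", "public service status"}
  };

  if (path == "/backup/admin.zip")  // deliberately planted trap link
  {
    std::clog << "decoy hit: " << sourceIp << " requested " << path
              << " at " << std::time(nullptr) << "\n";
    return "plausible-looking but fake content";  // keep the intruder engaged
  }

  auto it = realPages.find(path);
  return it != realPages.end() ? it->second : "404 not found";
}

int main()
{
  std::cout << handleRequest("/", "192.0.2.10") << "\n";
  std::cout << handleRequest("/backup/admin.zip", "203.0.113.5") << "\n";
  return 0;
}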
Only relatively minor tests of honey pot technology have been reported to date, usually in the context of a research effort. Almost no reports are available on the day-to-day use of deception as a structural component of a real enterprise security program. In fact, the vast majority of security programs for companies, government agencies, and national infrastructure would include no such functionality. Academic computer scientists have shown little interest in this type of security, as evidenced by the relatively thin body of literature on the subject. This lack of interest might stem from the discomfort associated with using computing to mislead. Another explanation might be the relative ineffectiveness of deception against the botnet threat, which is clearly the most important security issue on the Internet today. Regardless of the cause, this tendency to avoid the use of deception is unfortunate, because many cyber attacks, such as subtle break-ins by trusted insiders and Trojan horses being maliciously inserted by suppliers into delivered software, cannot be easily remedied by any other means.
The most direct benefit of deception is that it enables forensic analysis of intruder activity. By using a honey pot, unique insights into attack methods can be gained by watching what is occurring in real time. Such deception obviously works best in a hidden, stealth mode, unknown to the intruder, because if the intruder realizes that some vulnerable exploitation point is a fake, then no exploitation will occur. Honey pot pioneers Cliff Stoll, Bill Cheswick, and Lance Spitzner have provided a majority of the reported experience in real-time forensics using honey pots. They have all suggested that the most difficult task involves creating believability in the trap. It is worth noting that connecting a honey pot to real assets is a terrible idea.

[Figure 1.5 Components of an interface with deception: the interface to valid services (where vulnerabilities are possible) sits alongside a trap interface to the honey pot, which should resemble valid services; the adversary is left uncertain which is which, and only the valid interface leads to real assets.]

[Margin note: Deception is less effective against botnets than other types of attack methods.]
An additional potential benefit of deception is that it can introduce the clever idea that some discovered vulnerability might instead be a deliberately placed trap. Obviously, such an approach is only effective if the use of deception is not hidden; that is, the adversary must know that deception is an approved and accepted technique used for protection. It should therefore be obvious that the major advantage here is that an accidental vulnerability, one that might previously have been an open door for an intruder, will suddenly look like a possible trap. A further profound notion, perhaps for open discussion, is whether just the implied statement that deception might be present (perhaps without real justification) would actually reduce risk. Suppliers, for example, might be less willing to take the risk of Trojan horse insertion if the procuring organization advertises an open research and development program of detailed software test and inspection against this type of attack.
[Margin note: Do not connect honey pots to real assets!]

Separation

The principle of separation involves enforcement of access policy restrictions on the users and resources in a computing environment. Access policy restrictions result in separation domains, which are arguably the most common security architectural concept in use today. This is good news, because the creation of access-policy-based separation domains will be essential in the protection of national infrastructure. Most companies today will typically use firewalls to create perimeters around their presumed enterprise, and access decisions are embedded in the associated rules sets. This use of enterprise firewalls for separation is complemented by several other common access techniques:
● Authentication and identity management—These methods are used to validate and manage the identities on which separation decisions are made. They are essential in every enterprise but cannot be relied upon solely for infrastructure security. Malicious insiders, for example, will be authorized under such systems. In addition, external attacks such as DDOS are unaffected by authentication and identity management.
14 Chapter 1 INTRODUCTION
● Logical access controls—The access controls inherent in operating systems and applications provide some degree of separation, but they are also weak in the presence of compromised insiders. Furthermore, underlying vulnerabilities in applications and operating systems can often be used to subvert these methods.
● LAN controls—Access control lists on local area network (LAN) components can provide separation based on information such as Internet Protocol (IP) or media access control (MAC) address. In this regard, they are very much like firewalls but typically do not extend their scope beyond an isolated segment.
● Firewalls—For large-scale infrastructure, firewalls are particularly useful, because they separate one network from another. Today, every Internet-based connection is almost certainly protected by some sort of firewall functionality. This approach worked especially well in the early years of the Internet, when the number of Internet connections to the enterprise was small. Firewalls do remain useful, however, even with the massive connectivity of most groups to the Internet. As a result, national infrastructure should continue to include the use of firewalls to protect known perimeter gateways to the Internet (a minimal sketch of such rule-based separation follows this list).
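To make the separation-domain idea concrete, here is a minimal sketch, in Python, of how a rule-based filter might enforce access policy between segments. The Rule structure, network ranges, and port numbers are invented for illustration and do not correspond to any particular firewall product.

from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: str                 # source network, e.g. "10.20.0.0/24"
    dst: str                 # destination network
    dst_port: Optional[int]  # None matches any destination port

def evaluate(rules, src_ip, dst_ip, dst_port):
    # Return the action of the first matching rule; unmatched traffic is
    # denied, so each separation domain fails closed.
    for r in rules:
        if (ip_address(src_ip) in ip_network(r.src)
                and ip_address(dst_ip) in ip_network(r.dst)
                and (r.dst_port is None or r.dst_port == dst_port)):
            return r.action
    return "deny"

# Hypothetical policy: only the operations segment may reach the SCADA
# segment, and only on a single service port.
policy = [
    Rule("allow", "10.20.0.0/24", "10.99.0.0/24", 502),
    Rule("deny",  "0.0.0.0/0",    "10.99.0.0/24", None),
]

print(evaluate(policy, "10.20.0.7", "10.99.0.5", 502))   # allow
print(evaluate(policy, "192.0.2.9", "10.99.0.5", 502))   # deny

The same evaluation logic applies whether the mediation point is an enterprise perimeter firewall, an internal firewall, or a carrier-operated device; only the placement and the rule set change.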
Given the massive scale and complexity associated with national infrastructure, three specific separation enhancements are required, and all are extensions of the firewall concept.

Required Separation Enhancements for National Infrastructure Protection
1. The use of network-based firewalls is absolutely required for many national infrastructure applications, especially ones vulnerable to DDOS attacks from the Internet. This use of network-based mediation can take advantage of high-capacity network backbones if the service provider is involved in running the firewalls.
2. The use of firewalls to segregate and isolate internal infrastructure components from one another is a mandatory technique for simplifying the implementation of access control policies in an organization. When insiders have malicious intent, any exploit they might attempt should be explicitly contained by internal firewalls.
3. The use of commercial off-the-shelf firewalls, especially for SCADA usage, will require tailoring of the firewall to the unique protocol needs of the application. It is not acceptable for national infrastructure protection to retrofit the use of a generic, commercial, off-the-shelf tool that is not optimized for its specific use (see Figure 1.6 and the protocol-aware sketch after this list).
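The following is a hedged sketch of the kind of protocol-aware tailoring item 3 calls for: a filter that understands something about an industrial protocol rather than matching on addresses and ports alone. Modbus/TCP function codes are used purely as an example; the host list and the policy itself are hypothetical.

READ_FUNCTIONS  = {1, 2, 3, 4}     # Modbus reads: coils, inputs, registers
WRITE_FUNCTIONS = {5, 6, 15, 16}   # Modbus writes: coils, registers

ENGINEERING_HOSTS = {"10.20.0.7"}  # hypothetical hosts permitted to write

def allow_scada_request(src_ip: str, function_code: int) -> bool:
    # Allow read operations broadly, restrict writes to engineering hosts,
    # and fail closed on anything the filter does not recognize.
    if function_code in READ_FUNCTIONS:
        return True
    if function_code in WRITE_FUNCTIONS:
        return src_ip in ENGINEERING_HOSTS
    return False

print(allow_scada_request("10.20.0.7", 6))    # True: engineering write
print(allow_scada_request("10.20.0.99", 16))  # False: write from elsewhere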
With the advent of cloud computing, many enterprise and government agency security managers have come to acknowledge the benefits of network-based firewall processing. The approach scales well and helps to deal with the uncontrolled complexity one typically finds in national infrastructure. That said, the reality is that most national assets are still secured by placing a firewall at each of the hundreds or thousands of presumed choke points. This approach does not scale and leads to a false sense of security. It should also be recognized that the firewall is not the only device subjected to such scale problems. Intrusion detection systems, antivirus filtering, threat management, and denial of service filtering also require a network-based approach to function properly in national infrastructure.

An additional problem that exists in current national infrastructure is the relative lack of architectural separation used in an internal, trusted network. Most security engineers know that large systems are best protected by dividing them into smaller systems. Firewalls or packet filtering routers can be used to segregate an enterprise network into manageable domains. Unfortunately, the current state of the practice in infrastructure protection rarely includes a disciplined approach to separating internal assets. This is unfortunate, because it allows an intruder in one domain to have access to a more expansive view of the organizational infrastructure. The threat increases when the firewall has not been optimized for applications such as SCADA that require specialized protocol support.
[Figure 1.6 Firewall enhancements for national infrastructure: existing separation mechanisms (authentication and identity management, logical access controls, LAN controls, and commercial off-the-shelf perimeter firewalls) alongside the required new mechanisms (internal firewalls, tailored SCADA firewalls, and carrier network-based firewalls) across Internet service provider, commercial, and government infrastructure.]

Parceling a network into manageable smaller domains creates an environment that is easier to protect.
Diversity

The principle of diversity involves the selection and use of technology and systems that are intentionally different in substantive ways. These differences can include technology source, programming language, computing platform, physical location, and product vendor. For national infrastructure, realizing such diversity requires a coordinated program of procurement to ensure a proper mix of technologies and vendors. The purpose of introducing these differences is to deliberately create a measure of non-interoperability so that an attack cannot easily cascade from one component to another through exploitation of some common vulnerability. Certainly, it would be possible, even in a diverse environment, for an exploit to cascade, but the likelihood is reduced as the diversity profile increases.

This concept is somewhat controversial, because so much of computer science theory and information technology practice in the past couple of decades has been focused on maximizing interoperability of technologies. This might help explain the relative lack of attentiveness that diversity considerations receive in these fields. By way of analogy, however, cyber attacks on national infrastructure are mitigated by diverse technology just as disease propagation is reduced by a diverse biological ecosystem. That is, a problem that originates in one area of infrastructure with the intention of automatic propagation will only succeed in the presence of some degree of interoperability. If the technologies are sufficiently diverse, then the attack propagation will be reduced or even stopped. As such, national asset managers are obliged to consider means for introducing diversity in a cost-effective manner to realize its security benefits (see Figure 1.7).
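A toy model can make the cascade argument explicit. In the sketch below, an exploit written for one platform spreads only among components that share that platform; the component names and platform labels are invented, and full connectivity between components is assumed so that diversity is the only thing limiting propagation.

def propagate(components, start, exploited_platform):
    # Return the set of components the attack reaches, given a mapping of
    # component name to platform and the platform the exploit targets.
    reached = set()
    frontier = [start]
    while frontier:
        name = frontier.pop()
        if name in reached or components[name] != exploited_platform:
            continue
        reached.add(name)
        # Every component can talk to every other one in this toy model.
        frontier.extend(c for c in components if c not in reached)
    return reached

monoculture = {"web": "os-A", "db": "os-A", "control": "os-A"}
diverse     = {"web": "os-A", "db": "os-B", "control": "os-C"}

print(propagate(monoculture, "web", "os-A"))  # {'web', 'db', 'control'}
print(propagate(diverse, "web", "os-A"))      # {'web'}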
[Figure 1.7 Introducing diversity to national infrastructure: in a non-diverse environment the attack propagates from the adversary through target components 1, 2, and 3; in a diverse environment the propagation stops after the first target component.]
Diversity is especially tough to implement in national infrastructure for several reasons. First, it must be acknowledged that a single, major software vendor tends to currently dominate the personal computer (PC) operating system business landscape in most government and enterprise settings. This is not likely to change, so national infrastructure security initiatives must simply accept an ecosystem lacking in diversity in the PC landscape. The profile for operating system software on computer servers is slightly better from a diversity perspective, but the choices remain limited to a very small number of available sources. Mobile operating systems currently offer considerable diversity, but one cannot help but expect to see a trend toward greater consolidation.

Second, diversity conflicts with the often-found organizational goal of simplifying supplier and vendor relationships; that is, when a common technology is used throughout an organization, day-to-day maintenance, administration, and training costs are minimized. Furthermore, by purchasing in bulk, better terms are often available from a vendor. In contrast, the use of diversity could result in a reduction in the level of service provided in an organization. For example, suppose that an Internet service provider offers particularly secure and reliable network services to an organization. Perhaps the reliability is even measured to some impressive quantitative availability metric. If the organization is committed to diversity, then one might be forced to actually introduce a second provider with lower levels of reliability.

Enforcing diversity of products and services might seem counterintuitive if you have a reliable provider.

In spite of these drawbacks, diversity carries benefits that are indisputable for large-scale infrastructure. One of the great challenges in national infrastructure protection will thus involve finding ways to diversify technology products and services without increasing costs and losing business leverage with vendors.
Consistency

The principle of consistency involves uniform attention to security best practices across national infrastructure components. Determining which best practices are relevant for which national asset requires a combination of local knowledge about the asset, as well as broader knowledge of security vulnerabilities in generic infrastructure protection. Thus, the most mature approach to consistency will combine compliance with relevant standards, such as the Sarbanes–Oxley controls in the United States, with locally derived security policies that are tailored to the organizational mission. This implies that every organization charged with the design or operation of national infrastructure must have a local security policy. Amazingly, some large groups do not have such a policy today.
The types of best practices that are likely to be relevant for national infrastructure include well-defined software lifecycle methodologies, timely processes for patching software and systems, segregation of duty controls in system administration, threat management of all collected security information, security awareness training for all system administrators, operational configurations for infrastructure management, and use of software security tools to ensure proper integrity management. Most security experts agree on which best practices to include in a generic set of security requirements, as evidenced by the inclusion of a common core set of practices in every security standard. Attentiveness to consistency is thus one of the less controversial of our recommended principles.
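As a rough illustration of what consistency checking can look like in practice, the sketch below compares an asset's reported controls against a small baseline of best practices. The baseline entries and the asset record are hypothetical and are not drawn from Sarbanes–Oxley, FISMA, or any other specific standard.

BASELINE = {
    "patching_sla_days": 30,            # patches applied within 30 days
    "segregation_of_duties": True,
    "admin_security_training": True,
    "config_integrity_monitoring": True,
}

def audit(asset: dict) -> list:
    # Return the list of baseline controls the asset fails to meet.
    findings = []
    for control, required in BASELINE.items():
        value = asset.get(control)
        if isinstance(required, bool):
            if value is not True:
                findings.append(control)
        elif value is None or value > required:   # e.g., patch SLA too slow
            findings.append(control)
    return findings

asset = {"patching_sla_days": 45, "segregation_of_duties": True}
print(audit(asset))
# ['patching_sla_days', 'admin_security_training', 'config_integrity_monitoring']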
The greatest challenge in implementing best practice consistency across infrastructure involves auditing. The typical audit process is performed by an independent third-party entity doing an analysis of target infrastructure to determine consistency with a desired standard. The result of the audit is usually a numeric score, which is then reported widely and used for management decisions. In the United States, agencies of the federal government are audited against a cyber security standard known as FISMA (Federal Information Security Management Act). While auditing does lead to improved best practice coverage, there are often problems. For example, many audits are done poorly, which results in confusion and improper management decisions. In addition, with all the emphasis on numeric ratings, many agencies focus more on their score than on good security practice.
Today, organizations charged with protecting national infrastructure are subjected to several types of security audits. Streamlining these standards would certainly be a good idea, but some additional items for consideration include improving the types of common training provided to security administrators, as well as including past practice in infrastructure protection in common audit standards. The most obvious practical consideration for national infrastructure, however, would be national-level agreement on which standard or standards would be used to determine competence to protect national assets. While this is a straightforward concept, it could be tough to obtain wide concurrence among all national participants. A related issue involves commonality in national infrastructure operational configurations; this reduces the chances that a rogue configuration is installed for malicious purposes, perhaps by compromised insiders.

A good audit score is important but should not replace good security practices.

A national standard of competence for protecting our assets is needed.
Depth

The principle of depth involves the use of multiple security layers of protection for national infrastructure assets. These layers protect assets from both internal and external attacks via the familiar "defense in depth" approach; that is, multiple layers reduce the risk of attack by increasing the chances that at least one layer will be effective. This should appear to be a somewhat sketchy situation, however, from the perspective of traditional engineering. Civil engineers, for example, would never be comfortable designing a structure with multiple flawed supports in the hopes that one of them will hold the load. Unfortunately, cyber security experts have no choice but to rely on this flawed notion, perhaps highlighting the relative immaturity of security as an engineering discipline.

One hint as to why depth is such an important requirement is that national infrastructure components are currently controlled by software, and everyone knows that the current state of software engineering is abysmal. Compared to other types of engineering, software stands out as the only one that accepts the creation of knowingly flawed products as acceptable. The result is that all nontrivial software has exploitable vulnerabilities, so the idea that one should create multiple layers of security defense is unavoidable. It is worth mentioning that the degree of diversity in these layers will also have a direct impact on their effectiveness (see Figure 1.8).

Software engineering standards do not contain the same level of quality as civil and other engineering standards.
[Figure 1.8 National infrastructure security through defense in depth: an adversary's attack gets through one protection layer but is hopefully stopped by a later one before reaching the target asset.]

To maximize the usefulness of defense layers in national infrastructure, it is recommended that a combination of functional and procedural controls be included. For example, a common first layer of defense is to install an access control mechanism for the admission of devices to the local area network. This could involve router controls in a small network or firewall access rules in an enterprise. In either case, this first line of defense is clearly functional. As such, a good choice for a second layer of defense might involve something procedural, such as the deployment of scanning to determine whether inappropriate devices have gotten through the first layer. Such diversity makes it less likely that the cause of failure in one layer will produce a similar failure in another layer.
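The arithmetic behind the depth argument is simple enough to show directly. Assuming, optimistically, that layers fail independently (which is exactly what diversity among the layers is meant to approximate), an attack succeeds only if every layer fails; the per-layer failure probabilities below are invented for illustration.

from math import prod

def breach_probability(layer_failure_probs):
    # Probability an attack gets through all layers, assuming independence.
    return prod(layer_failure_probs)

single_layer = [0.20]                 # one flawed control
layered      = [0.20, 0.30, 0.50]     # three flawed but different controls

print(breach_probability(single_layer))  # 0.2
print(breach_probability(layered))       # 0.03

The numbers are made up, but the structure of the argument is not: each additional, genuinely different layer multiplies down the chance that a single exploited flaw reaches the asset.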
A great complication in national infrastructure protection is that many layers of defense assume the existence of a defined network perimeter. For example, the presence of many flaws in enterprise security found by auditors is mitigated by the recognition that intruders would have to penetrate the enterprise perimeter to exploit these weaknesses. Unfortunately, for most national assets, finding a perimeter is no longer possible. The assets of a country, for example, are almost impossible to define within some geographic or political boundary, much less a network one. Security managers must therefore be creative in identifying controls that will be meaningful for complex assets whose properties are not always evident. The risk of getting this wrong is that in providing multiple layers of defense, one might misapply the protections and leave some portion of the asset base with no layers in place.
Discretion

The principle of discretion involves individuals and groups making good decisions to obscure sensitive information about national infrastructure. This is done by combining formal mandatory information protection programs with informal discretionary behavior. Formal mandatory programs have been in place for many years in the U.S. federal government, where documents are associated with classifications, and policy enforcement is based on clearances granted to individuals. In the most intense environments, such as top-secret compartments in the intelligence community, violations of access policies could be interpreted as espionage, with all of the associated criminal implications. For this reason, prominent breaches of highly classified government information are not common.

Naturally, top-secret information within the intelligence community is at great risk for attack or infiltration.

In commercial settings, formal information protection programs are gaining wider acceptance because of the increased need to protect personally identifiable information (PII) such as
credit card numbers. Employees of companies around the world are starting to understand the importance of obscuring certain aspects of corporate activity, and this is healthy for national infrastructure protection. In fact, programs of discretion for national infrastructure protection will require a combination of corporate and government security policy enforcement, perhaps with custom-designed information markings for national assets. The resultant discretionary policy serves as a layer of protection to prevent national infrastructure-related information from reaching individuals who have no need to know such information.

A barrier in our recommended application of discretion is the maligned notion of "security through obscurity." Security experts, especially cryptographers, have long complained that obscurity is an unacceptable protection approach. They correctly reference the problems of trying to secure a system by hiding its underlying detail. Inevitably, an adversary discovers the hidden design secrets and the security protection is lost. For this reason, conventional computer security correctly dictates an open approach to software, design, and algorithms. An advantage of this open approach is the social review that comes with widespread advertisement; for example, the likelihood is low of software ever being correct without a significant amount of intense review by experts. So, the general computer security argument against "security through obscurity" is largely valid in most cases.

"Security through obscurity" may actually leave assets more vulnerable to attack than an open approach would.
Nevertheless, any manager charged with the protection of nontrivial, large-scale infrastructure will tell you that discretion and, yes, obscurity are indispensable components in a protection program. Obscuring details around technology used, software deployed, systems purchased, and configurations managed will help to avoid or at least slow down certain types of attacks. Hackers often claim that by discovering this type of information about a company and then advertising the weaknesses they are actually doing the local security team a favor. They suggest that such advertisement is required to motivate a security team toward a solution, but this is actually nonsense. Programs around proper discretion and obscurity for infrastructure information are indispensable and must be coordinated at the national level.
Collection

The principle of collection involves automated gathering of system-related information about national infrastructure to enable security analysis. Such collection is usually done in real time and involves probes or hooks in applications, system software, network elements, or hardware devices that gather information of interest. The use of audit trails in small-scale computer security is an example of a long-standing collection practice that introduces very little controversy among experts as to its utility. Security devices such as firewalls produce log files, and systems purported to have some degree of security usefulness will also generate an audit trail output. The practice is so common that a new type of product, called a security information management system (SIMS), has been developed to process all this data.

The primary operational challenge in setting up the right type of collection process for computers and networks has been twofold: First, decisions must be made about what types of information are to be collected. If this decision is made correctly, then the information collected should correspond to exactly the type of data required for security analysis, and nothing else. Second, decisions must be made about how much information is actually collected. This might involve the use of existing system functions, such as enabling the automatic generation of statistics on a router, or it could involve the introduction of some new type of function that deliberately gathers the desired information. Once these considerations are handled, appropriate mechanisms for collecting data from national infrastructure can be embedded into the security architecture (see Figure 1.9).
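A small sketch of the "collect only what analysis needs" point follows: a collector that projects each raw event down to an allowlisted set of fields before anything is stored, dropping privacy-sensitive and volume-heavy attributes. The field names and the sample record are hypothetical.

FIELDS_NEEDED = {"timestamp", "src_ip", "dst_ip", "dst_port", "action"}

def collect(raw_event: dict) -> dict:
    # Keep only the fields required for security analysis; everything else
    # is dropped before storage, which limits both privacy exposure and volume.
    return {k: v for k, v in raw_event.items() if k in FIELDS_NEEDED}

raw = {
    "timestamp": "2011-03-01T12:00:00Z",
    "src_ip": "192.0.2.10",
    "dst_ip": "10.99.0.5",
    "dst_port": 443,
    "action": "deny",
    "username": "jsmith",            # privacy-sensitive, not needed
    "payload_excerpt": "GET /...",   # volume-heavy, not needed
}
print(collect(raw))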
[Figure 1.9 Collecting national infrastructure-related security information: typical collection points (device status monitors, operating system logs, network monitors, application hooks) distributed across government and industry feed data collection repositories, raising type and volume, transport, and privacy issues on the way to interpretation and action.]

The technical and operational challenges associated with the collection of logs and audit trails are heightened in the protection of national assets. Because national infrastructure is so complex, determining what information should be collected turns out to be a difficult exercise. In particular, the potential arises with large-scale collection to intrude on the privacy of individuals and groups within a nation. As such, any initiative to protect infrastructure through the collection of data must include at least some measure of privacy policy determination. Similarly, the volumes of data collected from large infrastructure can exceed practical limits. Telecommunications collection systems designed to protect the integrity of a service provider backbone, for example, can easily generate many terabytes of data in hours of processing.

In both cases, technical and operational expertise must be applied to ensure that the appropriate data is collected in the proper amounts. The good news is that virtually all security protection algorithms require no deep, probing information of the type that might generate privacy or volumetric issues. The challenge arises instead when collection is done without proper advance analysis, which often results in the collection of more data than is needed. This can easily lead to privacy problems in some national collection repositories, so planning is particularly necessary. In any event, a national strategy of data collection is required, with the usual sorts of legal and policy guidance on who collects what and under which circumstances. As we suggested above, this exercise must be guided by the requirements for security analysis—and nothing else.

What and how much data to collect is an operational challenge.

Only collect as much data as is necessary for security purposes.
Correlation

The principle of correlation involves a specific type of analysis that can be performed on factors related to national infrastructure protection. The goal of correlation is to identify whether security-related indicators might emerge from the analysis. For example, if some national computing asset begins operating in a sluggish manner, then other factors would be examined for a possible correlative relationship. One could imagine the local and wide area networks being analyzed for traffic that might be of an attack nature. In addition, similar computing assets might be examined to determine whether they are experiencing a similar functional problem. Also, all software and services embedded in the national asset might be analyzed for known vulnerabilities. In each case, the purpose of the correlation is to combine and compare factors to help explain a given security issue. This type of comparison-oriented analysis is indispensable for national infrastructure because of its complexity.

Monitoring and analyzing networks and data collection may reveal a hidden or emerging security threat.
Interestingly, almost every major national infrastructure protection initiative attempted to date has included a fusion center for real-time correlation of data. A fusion center is a physical security operations center with means for collecting and analyzing multiple sources of ingress data. It is not uncommon for such a center to include massive display screens with colorful, visualized representations, nor is it uncommon to find such centers in the military with teams of enlisted people performing the manual chores. This is an important point because, while such automated fusion is certainly promising, best practice in correlation for national infrastructure protection must include the requirement that human judgment be included in the analysis. Thus, regardless of whether resources are centralized into one physical location, the reality is that human beings will need to be included in the processing (see Figure 1.10).

[Figure 1.10 National infrastructure high-level correlation approach: multiple ingress data feeds enter a correlation process of comparison and analysis of relevant factors, which derives real-time conclusions and outputs recommended actions.]

In practice, fusion centers and the associated processes and correlation algorithms have been tough to implement, even in small-scale environments. Botnets, for example, involve the use of source systems that are selected almost arbitrarily. As such, the use of correlation to determine where and why the attack is occurring has been useless. In fact, correlating geographic information with the sources of botnet activity has even led to many false conclusions about who is attacking whom. Countless hours have been spent by security teams poring through botnet information trying to determine the source, and the best one can hope for might be information about controllers or software drops. In the end, current correlation approaches fall short. What is needed to improve present correlation capabilities for national infrastructure protection involves multiple steps.

Three Steps to Improve Current Correlation Capabilities
1. The actual computer science around correlation algorithms needs to be better investigated. Little attention has been placed in academic computer science and applied mathematics departments on multifactor correlation of real-time security data. This could be changed with appropriate funding and grant emphasis from the government.
2. The ability to identify reliable data feeds needs to be greatly improved. Too much attention has been placed on ad hoc collection of volunteered feeds, and this complicates the ability of analysts to perform meaningful correlation.
3. The design and operation of a national-level fusion center must be given serious consideration. Some means must be identified for putting aside political and funding problems in order to accomplish this important objective.
Awareness

The principle of awareness involves an organization understanding the differences, in real time and at all times, between observed and normal status in national infrastructure. This status can include risks, vulnerabilities, and behavior in the target infrastructure. Behavior refers here to the mix of user activity, system processing, network traffic, and computing volumes in the software, computers, and systems that comprise infrastructure. The implication is that the organization can somehow characterize a given situation as being either normal or abnormal. Furthermore, the organization must have the ability to detect and measure differences between these two behavioral states. Correlation analysis is usually inherent in such determinations, but the real challenge is less the algorithms and more the processes that must be in place to ensure situational awareness every hour of every day. For example, if a new vulnerability arises that has impact on the local infrastructure, then this knowledge must be obtained and factored into management decisions immediately.

Managers of national infrastructure generally do not have to be convinced that situational awareness is important. The big issue instead is how to achieve this goal. In practice, real-time awareness requires attentiveness and vigilance rarely found in normal computer security. Data must first be collected and enabled to flow into a fusion center at all times so correlation can take place. The results of the correlation must be used to establish a profiled baseline of behavior so differences can be measured. This sounds easier than it is, because so many odd situations have the ability to mimic normal behavior (when it is really a problem) or a problem (when it really is nothing). Nevertheless, national infrastructure protection demands that managers of assets create a locally relevant means for being able to comment accurately on the state of security at all times. This allows for proper management decisions about security (see Figure 1.11).
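One minimal way to realize the baseline-versus-observed comparison is a statistical profile built from history, with new observations flagged when they deviate sharply from it. The metric (failed logins per hour), the data, and the three-sigma threshold below are illustrative assumptions only.

from statistics import mean, stdev

def build_baseline(history):
    # Profile normal behavior as a mean and standard deviation.
    return mean(history), stdev(history)

def is_abnormal(observation, baseline, threshold=3.0):
    # Flag values more than `threshold` standard deviations from normal.
    mu, sigma = baseline
    return abs(observation - mu) > threshold * sigma

history = [110, 120, 95, 105, 130, 115, 100, 125]   # e.g., login failures/hour
baseline = build_baseline(history)

print(is_abnormal(118, baseline))   # False: within the normal band
print(is_abnormal(600, baseline))   # True: warrants immediate attention

As the text notes, the hard part is not this arithmetic but the process discipline: keeping the profile current, watching it continuously, and recognizing when abnormal activity is mimicking normal behavior.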
Awareness builds on collection and correlation, but is not limited to those areas alone.

Interestingly, situational awareness has not been considered a major component of the computer security equation to date. The concept plays no substantive role in small-scale security, such as in a home network, because when the computing base to be protected is simple enough, characterizing real-time situational status is just not necessary. Similarly, when a security manager puts in place security controls for a small enterprise, situational awareness is not the highest priority. Generally, the closest one might expect to come to some degree of real-time awareness for a small system might be an occasional review of system log files. So, the transition from small-scale to large-scale infrastructure protection does require a new attentiveness to situational awareness that is not well developed. It is also worth noting that the general notion of "user awareness" of security is also not the principle specified here. While it is helpful for end users to have knowledge of security, any professionally designed program of national infrastructure security must presume that a high percentage of end users will always make the wrong sorts of security decisions if allowed. The implication is that national infrastructure protection must never rely on the decision-making of end users through programs of awareness.
A further advance that is necessary for situational awareness involves enhancements in approaches to security metrics reporting. Where the non-cyber national intelligence community has done a great job developing means for delivering daily intelligence briefs to senior government officials, the cyber security community has rarely considered this approach. The reality is that, for situational awareness to become a structural component of national infrastructure protection, valid metrics must be developed to accurately portray status, and these must be codified into a suitable type of regular intelligence report that senior officials can use to determine security status. It would not be unreasonable to expect this cyber security intelligence to flow from a central point such as a fusion center, but in general this is not a requirement.

[Figure 1.11 Real-time situational awareness process flow: raw data feeds collection, a combined automated and manual fusion process produces intelligence, and the resulting situational awareness is targeted at managers.]

Large-scale infrastructure protection requires a higher level of awareness than most groups currently employ.
Response

The principle of response involves assurance that processes are in place to react to any security-related indicator that becomes available. These indicators should flow into the response process primarily from the situational awareness layer. National infrastructure response should emphasize indicators rather than incidents. In most current computer security applications, the response team waits for serious problems to occur, usually including complaints from users, applications running poorly, and networks operating in a sluggish manner. Once this occurs, the response team springs into action, even though by this time the security game has already been lost. For essential national infrastructure services, the idea of waiting for the service to degrade before responding does not make logical sense.

An additional response-related change for national infrastructure protection is that the maligned concept of "false positive" must be reconsidered. In current small-scale environments, a major goal of the computer security team is to minimize the number of response cases that are initiated only to find that nothing was wrong after all. This is an easy goal to reach by simply waiting for disasters to be confirmed beyond a shadow of a doubt before response is initiated. For national infrastructure, however, this is obviously unacceptable. Instead, response must follow indicators, and the concept of minimizing false positives must not be part of the approach. The only quantitative metric that must be minimized in national-level response is risk (see Figure 1.12).

[Figure 1.12 National infrastructure security response approach: a pre-attack response process driven by indicators carries a higher false-positive rate but lower security risk and is recommended for national infrastructure; a post-attack response process driven by observed effects carries a lower false-positive rate but higher security risk and should be used only if required.]
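The contrast in Figure 1.12 can be expressed as two response postures that differ only in where the trigger threshold sits. The scores and thresholds in this sketch are invented; the point is simply that the indicator-driven posture opens cases earlier and accepts the resulting false positives.

INDICATOR_THRESHOLD = 0.3   # open a case early, pre-attack
INCIDENT_THRESHOLD  = 0.9   # traditional posture: wait for confirmed harm

def respond(signal_score: float, posture: str) -> str:
    # Choose the trigger threshold based on the response posture in use.
    threshold = INDICATOR_THRESHOLD if posture == "indicator" else INCIDENT_THRESHOLD
    return "open response case" if signal_score >= threshold else "keep monitoring"

for score in (0.35, 0.95):
    print(score, respond(score, "indicator"), "|", respond(score, "incident"))
# A 0.35 signal opens a case only under the indicator-driven posture;
# a 0.95 signal (post-attack) is caught by both, but by then the damage is done.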
A challenge that must be considered in establishing response functions for national asset protection is that relevant indicators often arise long before any harmful effects are seen. This suggests that those protecting infrastructure must have accurate situational awareness that considers much more than just visible impacts such as users having trouble, networks being down, or services being unavailable. Instead, often subtle indicators must
be analyzed carefully, which is where the challenges arise with false positives. When response teams agree to consider such indicators, it becomes more likely that a given indicator will turn out to be benign. A great secret to proper incident response for national infrastructure is that higher false positive rates might actually be a good sign.

A higher rate of false positives must be tolerated for national infrastructure protection.

It is worth noting that the principles of collection, correlation, awareness, and response are all consistent with the implementation of a national fusion center. Clearly, response activities are often dependent on a real-time, ubiquitous operations center to coordinate activities, contact key individuals, collect data as it becomes available, and document progress in the response activities. As such, it should not be unexpected that national-level response for cyber security should include some sort of centralized national center. The creation of such a facility should be the centerpiece of any national infrastructure protection program and should involve the active participation of all organizations with responsibility for national services.
Implementing the Principles Nationally

To effectively apply this full set of security principles in practice for national infrastructure protection, several practical implementation considerations emerge:
● Commissions and groups—Numerous commissions and groups have been created over the years with the purpose of national infrastructure protection. Most have had some minor positive impact on infrastructure security, but none has had sufficient impact to reduce present national risk to acceptable levels. An observation here is that many of these commissions and groups have become the end rather than the means toward a cyber security solution. When this occurs, their likelihood of success diminishes considerably. Future commissions and groups should take this into consideration.
● Information sharing—Too much attention is placed on information sharing between government and industry, perhaps because information sharing would seem on the surface to carry much benefit to both parties. The advice here is that a comprehensive information sharing program is not easy to implement simply because organizations prefer to maintain a low profile when fighting a vulnerability or attack. In addition, the presumption that some organization—government or commercial—might have some nugget of information that could solve a cyber attack or reduce risk is not generally
consistent with practice. Thus, the motivation for a commercial entity to share vulnerability or incident-related information with the government is low; very little value generally comes from such sharing.
● International cooperation—National initiatives focused on creating government cyber security legislation must acknowledge that the Internet is global, as are the shared services such as the domain name system (DNS) that all national and global assets are so dependent upon. Thus, any program of national infrastructure protection must include provisions for international cooperation, and such cooperation implies agreements between participants that will be followed as long as everyone perceives benefit.
● Technical and operational costs—To implement the principles described above, considerable technical and operational costs will need to be covered across government and commercial environments. While it is tempting to presume that the purveyors of national infrastructure can simply absorb these costs into normal business budgets, this has not been the case in the past. Instead, the emphasis should be on rewards and incentives for organizations that make the decision to implement these principles. This point is critical because it suggests that the best possible use of government funds might be as straightforward as helping to directly fund initiatives that will help to secure national assets.

The bulk of our discussion in the ensuing chapters is technical in nature; that is, programmatic and political issues are conveniently ignored. This does not diminish their importance, but rather is driven by our decision to separate our concerns and focus in this book on the details of "what" must be done, rather than "how."
2 DECEPTION

Create a highly controlled network. Within that network, you place production systems and then monitor, capture, and analyze all activity that happens within that network. Because this is not a production network, but rather our Honeynet, any traffic is suspicious by nature.
The Honeynet Project 1

The use of deception in computing involves deliberately misleading an adversary by creating a system component that looks real but is in fact a trap. The system component, sometimes referred to as a honey pot, is usually functionality embedded in a computing or networking system, but it can also be a physical asset designed to trick an intruder. In both cases, a common interface is presented to an adversary who might access real functionality connected to real assets, but who might also unknowingly access deceptive functionality connected to bogus assets. In a well-designed deceptive system, the distinction between real and trap functionality should not be apparent to the intruder (see Figure 2.1).
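To ground the discussion, here is a deliberately tiny honey pot sketch: a listener on an otherwise unused port that presents a plausible banner and records every connection attempt. It is standalone and, consistent with the earlier warning, connected to nothing real; the port number and banner are arbitrary choices, and a production honey pot would be far more elaborate.

import socket, datetime

def run_trap(host="0.0.0.0", port=2323, banner=b"login: "):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                # Any traffic here is suspicious by definition: log it.
                print(datetime.datetime.utcnow().isoformat(), "probe from", addr)
                conn.sendall(banner)          # look real enough to hold attention
                try:
                    data = conn.recv(1024)    # capture whatever the intruder types
                    print("captured:", data)
                except OSError:
                    pass

if __name__ == "__main__":
    run_trap()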
The purpose of deception, ultimately, is to enhance security, so in the context of national infrastructure it can be used for large-scale protection of assets. The reason why deception works is that it helps accomplish any or all of the following four security objectives:
● Attention—The attention of an adversary can be diverted from real assets toward bogus ones.
● Energy—The valuable time and energy of an adversary can be wasted on bogus targets.
● Uncertainty—Uncertainty can be created around the veracity of a discovered vulnerability.
● Analysis—A basis can be provided for real-time security analysis of adversary behavior.

1 The Honeynet Project, Know Your Enemy: Revealing the Security Tools, Tactics, and Motives of the Blackhat Community, Addison–Wesley Professional, New York, 2002. (I highly recommend this amazing and original book.) See also B. Cheswick and S. Bellovin, Firewalls and Internet Security: Repelling the Wily Hacker, 1st ed., Addison–Wesley Professional, New York, 1994; C. Stoll, The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage, Pocket Books, New York, 2005.
The fact that deception diverts the attention of adversaries, while also wasting their time and energy, should be familiar to anyone who has ever used a honey pot on a network. As long as the trap is set properly and the honey pot is sufficiently realistic, adversaries might direct their time, attention, and energy toward something that is useless from an attack perspective. They might even plant time bombs in trap functionality that they believe
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx
   @author Jane Programmer  @cwid   123 45 678  @class.docx

More Related Content

Similar to @author Jane Programmer @cwid 123 45 678 @class.docx

Week 2 Assignment 2 Presentation TopicsSubmit Assignment· Due.docx
Week 2 Assignment 2 Presentation TopicsSubmit Assignment· Due.docxWeek 2 Assignment 2 Presentation TopicsSubmit Assignment· Due.docx
Week 2 Assignment 2 Presentation TopicsSubmit Assignment· Due.docxjessiehampson
 
Handbook all eng
Handbook all engHandbook all eng
Handbook all enganiqa7
 
Whitepaper on distributed ledger technology
Whitepaper on distributed ledger technologyWhitepaper on distributed ledger technology
Whitepaper on distributed ledger technologyUnder the sharing mood
 
The Defender's Dilemma
The Defender's DilemmaThe Defender's Dilemma
The Defender's DilemmaSymantec
 
Easttom C. Computer Security Fundamentals 3ed 2016.pdf
Easttom C. Computer Security Fundamentals 3ed 2016.pdfEasttom C. Computer Security Fundamentals 3ed 2016.pdf
Easttom C. Computer Security Fundamentals 3ed 2016.pdfJarellScott
 
Symantec Internet Security Threat Report - 2009
Symantec Internet Security Threat Report - 2009Symantec Internet Security Threat Report - 2009
Symantec Internet Security Threat Report - 2009guest6561cc
 
Deployment guide
Deployment guideDeployment guide
Deployment guidedonzerci
 
Learn C# Includes The C# 3.0 Features
Learn C# Includes The C# 3.0 FeaturesLearn C# Includes The C# 3.0 Features
Learn C# Includes The C# 3.0 FeaturesZEZUA Z.
 
Uni cambridge
Uni cambridgeUni cambridge
Uni cambridgeN/A
 
Peachpit mastering xcode 4 develop and design sep 2011
Peachpit mastering xcode 4 develop and design sep 2011Peachpit mastering xcode 4 develop and design sep 2011
Peachpit mastering xcode 4 develop and design sep 2011Jose Erickson
 
Beej Guide Network Programming
Beej Guide Network ProgrammingBeej Guide Network Programming
Beej Guide Network ProgrammingSriram Raj
 
State of the Art: IoT Honeypots
State of the Art: IoT HoneypotsState of the Art: IoT Honeypots
State of the Art: IoT HoneypotsBiagio Botticelli
 
Doors Getting Started
Doors Getting StartedDoors Getting Started
Doors Getting Startedsong4fun
 
Pc 811 transformation_guide
Pc 811 transformation_guidePc 811 transformation_guide
Pc 811 transformation_guideVenkat Madduru
 
iGUARD: An Intelligent Way To Secure - Report
iGUARD: An Intelligent Way To Secure - ReportiGUARD: An Intelligent Way To Secure - Report
iGUARD: An Intelligent Way To Secure - ReportNandu B Rajan
 

Similar to @author Jane Programmer @cwid 123 45 678 @class.docx (20)

Week 2 Assignment 2 Presentation TopicsSubmit Assignment· Due.docx
Week 2 Assignment 2 Presentation TopicsSubmit Assignment· Due.docxWeek 2 Assignment 2 Presentation TopicsSubmit Assignment· Due.docx
Week 2 Assignment 2 Presentation TopicsSubmit Assignment· Due.docx
 
Handbook all eng
Handbook all engHandbook all eng
Handbook all eng
 
Wisr2011 en
Wisr2011 enWisr2011 en
Wisr2011 en
 
z_remy_spaan
z_remy_spaanz_remy_spaan
z_remy_spaan
 
Whitepaper on distributed ledger technology
Whitepaper on distributed ledger technologyWhitepaper on distributed ledger technology
Whitepaper on distributed ledger technology
 
Begining j2 me
Begining j2 meBegining j2 me
Begining j2 me
 
The Defender's Dilemma
The Defender's DilemmaThe Defender's Dilemma
The Defender's Dilemma
 
Easttom C. Computer Security Fundamentals 3ed 2016.pdf
Easttom C. Computer Security Fundamentals 3ed 2016.pdfEasttom C. Computer Security Fundamentals 3ed 2016.pdf
Easttom C. Computer Security Fundamentals 3ed 2016.pdf
 
Symantec Internet Security Threat Report - 2009
Symantec Internet Security Threat Report - 2009Symantec Internet Security Threat Report - 2009
Symantec Internet Security Threat Report - 2009
 
Deployment guide
Deployment guideDeployment guide
Deployment guide
 
Learn C# Includes The C# 3.0 Features
Learn C# Includes The C# 3.0 FeaturesLearn C# Includes The C# 3.0 Features
Learn C# Includes The C# 3.0 Features
 
Uni cambridge
Uni cambridgeUni cambridge
Uni cambridge
 
Peachpit mastering xcode 4 develop and design sep 2011
Peachpit mastering xcode 4 develop and design sep 2011Peachpit mastering xcode 4 develop and design sep 2011
Peachpit mastering xcode 4 develop and design sep 2011
 
Beej Guide Network Programming
Beej Guide Network ProgrammingBeej Guide Network Programming
Beej Guide Network Programming
 
State of the Art: IoT Honeypots
State of the Art: IoT HoneypotsState of the Art: IoT Honeypots
State of the Art: IoT Honeypots
 
catalogo ck3
catalogo ck3catalogo ck3
catalogo ck3
 
Manual Ck3 honeywell - www.codeprint.com.br
Manual Ck3 honeywell - www.codeprint.com.brManual Ck3 honeywell - www.codeprint.com.br
Manual Ck3 honeywell - www.codeprint.com.br
 
Doors Getting Started
Doors Getting StartedDoors Getting Started
Doors Getting Started
 
Pc 811 transformation_guide
Pc 811 transformation_guidePc 811 transformation_guide
Pc 811 transformation_guide
 
iGUARD: An Intelligent Way To Secure - Report
iGUARD: An Intelligent Way To Secure - ReportiGUARD: An Intelligent Way To Secure - Report
iGUARD: An Intelligent Way To Secure - Report
 

More from ShiraPrater50

Read Chapter 3. Answer the following questions1.Wha.docx
Read Chapter 3. Answer the following questions1.Wha.docxRead Chapter 3. Answer the following questions1.Wha.docx
Read Chapter 3. Answer the following questions1.Wha.docxShiraPrater50
 
Read Chapter 15 and answer the following questions 1.  De.docx
Read Chapter 15 and answer the following questions 1.  De.docxRead Chapter 15 and answer the following questions 1.  De.docx
Read Chapter 15 and answer the following questions 1.  De.docxShiraPrater50
 
Read Chapter 2 and answer the following questions1.  List .docx
Read Chapter 2 and answer the following questions1.  List .docxRead Chapter 2 and answer the following questions1.  List .docx
Read Chapter 2 and answer the following questions1.  List .docxShiraPrater50
 
Read chapter 7 and write the book report  The paper should be .docx
Read chapter 7 and write the book report  The paper should be .docxRead chapter 7 and write the book report  The paper should be .docx
Read chapter 7 and write the book report  The paper should be .docxShiraPrater50
 
Read Chapter 7 and answer the following questions1.  What a.docx
Read Chapter 7 and answer the following questions1.  What a.docxRead Chapter 7 and answer the following questions1.  What a.docx
Read Chapter 7 and answer the following questions1.  What a.docxShiraPrater50
 
Read chapter 14, 15 and 18 of the class textbook.Saucier.docx
Read chapter 14, 15 and 18 of the class textbook.Saucier.docxRead chapter 14, 15 and 18 of the class textbook.Saucier.docx
Read chapter 14, 15 and 18 of the class textbook.Saucier.docxShiraPrater50
 
Read Chapter 10 APA FORMAT1. In the last century, what historica.docx
Read Chapter 10 APA FORMAT1. In the last century, what historica.docxRead Chapter 10 APA FORMAT1. In the last century, what historica.docx
Read Chapter 10 APA FORMAT1. In the last century, what historica.docxShiraPrater50
 
Read chapter 7 and write the book report  The paper should b.docx
Read chapter 7 and write the book report  The paper should b.docxRead chapter 7 and write the book report  The paper should b.docx
Read chapter 7 and write the book report  The paper should b.docxShiraPrater50
 
Read Chapter 14 and answer the following questions1.  Explain t.docx
Read Chapter 14 and answer the following questions1.  Explain t.docxRead Chapter 14 and answer the following questions1.  Explain t.docx
Read Chapter 14 and answer the following questions1.  Explain t.docxShiraPrater50
 
Read Chapter 2 first. Then come to this assignment.The first t.docx
Read Chapter 2 first. Then come to this assignment.The first t.docxRead Chapter 2 first. Then come to this assignment.The first t.docx
Read Chapter 2 first. Then come to this assignment.The first t.docxShiraPrater50
 
Journal of Public Affairs Education 515Teaching Grammar a.docx
 Journal of Public Affairs Education 515Teaching Grammar a.docx Journal of Public Affairs Education 515Teaching Grammar a.docx
Journal of Public Affairs Education 515Teaching Grammar a.docxShiraPrater50
 
Learner Guide TLIR5014 Manage suppliers TLIR.docx
 Learner Guide TLIR5014 Manage suppliers TLIR.docx Learner Guide TLIR5014 Manage suppliers TLIR.docx
Learner Guide TLIR5014 Manage suppliers TLIR.docxShiraPrater50
 
Lab 5 Nessus Vulnerability Scan Report © 2012 by Jone.docx
 Lab 5 Nessus Vulnerability Scan Report © 2012 by Jone.docx Lab 5 Nessus Vulnerability Scan Report © 2012 by Jone.docx
Lab 5 Nessus Vulnerability Scan Report © 2012 by Jone.docxShiraPrater50
 
Leveled and Exclusionary Tracking English Learners Acce.docx
 Leveled and Exclusionary Tracking English Learners Acce.docx Leveled and Exclusionary Tracking English Learners Acce.docx
Leveled and Exclusionary Tracking English Learners Acce.docxShiraPrater50
 
Lab 5 Nessus Vulnerability Scan Report © 2015 by Jone.docx
 Lab 5 Nessus Vulnerability Scan Report © 2015 by Jone.docx Lab 5 Nessus Vulnerability Scan Report © 2015 by Jone.docx
Lab 5 Nessus Vulnerability Scan Report © 2015 by Jone.docxShiraPrater50
 
MBA 6941, Managing Project Teams 1 Course Learning Ou.docx
 MBA 6941, Managing Project Teams 1 Course Learning Ou.docx MBA 6941, Managing Project Teams 1 Course Learning Ou.docx
MBA 6941, Managing Project Teams 1 Course Learning Ou.docxShiraPrater50
 
Inventory Decisions in Dells Supply ChainAuthor(s) Ro.docx
 Inventory Decisions in Dells Supply ChainAuthor(s) Ro.docx Inventory Decisions in Dells Supply ChainAuthor(s) Ro.docx
Inventory Decisions in Dells Supply ChainAuthor(s) Ro.docxShiraPrater50
 
It’s Your Choice 10 – Clear Values 2nd Chain Link- Trade-offs .docx
 It’s Your Choice 10 – Clear Values 2nd Chain Link- Trade-offs .docx It’s Your Choice 10 – Clear Values 2nd Chain Link- Trade-offs .docx
It’s Your Choice 10 – Clear Values 2nd Chain Link- Trade-offs .docxShiraPrater50
 
MBA 5101, Strategic Management and Business Policy 1 .docx
 MBA 5101, Strategic Management and Business Policy 1 .docx MBA 5101, Strategic Management and Business Policy 1 .docx
MBA 5101, Strategic Management and Business Policy 1 .docxShiraPrater50
 
MAJOR WORLD RELIGIONSJudaismJudaism (began .docx
 MAJOR WORLD RELIGIONSJudaismJudaism (began .docx MAJOR WORLD RELIGIONSJudaismJudaism (began .docx
MAJOR WORLD RELIGIONSJudaismJudaism (began .docxShiraPrater50
 

More from ShiraPrater50 (20)

Read Chapter 3. Answer the following questions1.Wha.docx
Read Chapter 3. Answer the following questions1.Wha.docxRead Chapter 3. Answer the following questions1.Wha.docx
Read Chapter 3. Answer the following questions1.Wha.docx
 
Read Chapter 15 and answer the following questions 1.  De.docx
Read Chapter 15 and answer the following questions 1.  De.docxRead Chapter 15 and answer the following questions 1.  De.docx
Read Chapter 15 and answer the following questions 1.  De.docx
 
Read Chapter 2 and answer the following questions1.  List .docx
Read Chapter 2 and answer the following questions1.  List .docxRead Chapter 2 and answer the following questions1.  List .docx
Read Chapter 2 and answer the following questions1.  List .docx
 
Read chapter 7 and write the book report  The paper should be .docx
Read chapter 7 and write the book report  The paper should be .docxRead chapter 7 and write the book report  The paper should be .docx
Read chapter 7 and write the book report  The paper should be .docx
 
Read Chapter 7 and answer the following questions1.  What a.docx
Read Chapter 7 and answer the following questions1.  What a.docxRead Chapter 7 and answer the following questions1.  What a.docx
Read Chapter 7 and answer the following questions1.  What a.docx
 
Read chapter 14, 15 and 18 of the class textbook.Saucier.docx
Read chapter 14, 15 and 18 of the class textbook.Saucier.docxRead chapter 14, 15 and 18 of the class textbook.Saucier.docx
Read chapter 14, 15 and 18 of the class textbook.Saucier.docx
 
Read Chapter 10 APA FORMAT1. In the last century, what historica.docx
Read Chapter 10 APA FORMAT1. In the last century, what historica.docxRead Chapter 10 APA FORMAT1. In the last century, what historica.docx
Read Chapter 10 APA FORMAT1. In the last century, what historica.docx
 
Read chapter 7 and write the book report  The paper should b.docx
Read chapter 7 and write the book report  The paper should b.docxRead chapter 7 and write the book report  The paper should b.docx
Read chapter 7 and write the book report  The paper should b.docx
 
Read Chapter 14 and answer the following questions1.  Explain t.docx
Read Chapter 14 and answer the following questions1.  Explain t.docxRead Chapter 14 and answer the following questions1.  Explain t.docx
Read Chapter 14 and answer the following questions1.  Explain t.docx
 
Read Chapter 2 first. Then come to this assignment.The first t.docx
Read Chapter 2 first. Then come to this assignment.The first t.docxRead Chapter 2 first. Then come to this assignment.The first t.docx
Read Chapter 2 first. Then come to this assignment.The first t.docx
 
Journal of Public Affairs Education 515Teaching Grammar a.docx
 Journal of Public Affairs Education 515Teaching Grammar a.docx Journal of Public Affairs Education 515Teaching Grammar a.docx
Journal of Public Affairs Education 515Teaching Grammar a.docx
 
Learner Guide TLIR5014 Manage suppliers TLIR.docx
 Learner Guide TLIR5014 Manage suppliers TLIR.docx Learner Guide TLIR5014 Manage suppliers TLIR.docx
Learner Guide TLIR5014 Manage suppliers TLIR.docx
 
Lab 5 Nessus Vulnerability Scan Report © 2012 by Jone.docx
 Lab 5 Nessus Vulnerability Scan Report © 2012 by Jone.docx Lab 5 Nessus Vulnerability Scan Report © 2012 by Jone.docx
Lab 5 Nessus Vulnerability Scan Report © 2012 by Jone.docx
 
Leveled and Exclusionary Tracking English Learners Acce.docx
 Leveled and Exclusionary Tracking English Learners Acce.docx Leveled and Exclusionary Tracking English Learners Acce.docx
Leveled and Exclusionary Tracking English Learners Acce.docx
 
Lab 5 Nessus Vulnerability Scan Report © 2015 by Jone.docx
 Lab 5 Nessus Vulnerability Scan Report © 2015 by Jone.docx Lab 5 Nessus Vulnerability Scan Report © 2015 by Jone.docx
Lab 5 Nessus Vulnerability Scan Report © 2015 by Jone.docx
 
MBA 6941, Managing Project Teams 1 Course Learning Ou.docx
 MBA 6941, Managing Project Teams 1 Course Learning Ou.docx MBA 6941, Managing Project Teams 1 Course Learning Ou.docx
MBA 6941, Managing Project Teams 1 Course Learning Ou.docx
 
Inventory Decisions in Dells Supply ChainAuthor(s) Ro.docx
 Inventory Decisions in Dells Supply ChainAuthor(s) Ro.docx Inventory Decisions in Dells Supply ChainAuthor(s) Ro.docx
Inventory Decisions in Dells Supply ChainAuthor(s) Ro.docx
 
It’s Your Choice 10 – Clear Values 2nd Chain Link- Trade-offs .docx
 It’s Your Choice 10 – Clear Values 2nd Chain Link- Trade-offs .docx It’s Your Choice 10 – Clear Values 2nd Chain Link- Trade-offs .docx
It’s Your Choice 10 – Clear Values 2nd Chain Link- Trade-offs .docx
 
MBA 5101, Strategic Management and Business Policy 1 .docx
 MBA 5101, Strategic Management and Business Policy 1 .docx MBA 5101, Strategic Management and Business Policy 1 .docx
MBA 5101, Strategic Management and Business Policy 1 .docx
 
MAJOR WORLD RELIGIONSJudaismJudaism (began .docx
 MAJOR WORLD RELIGIONSJudaismJudaism (began .docx MAJOR WORLD RELIGIONSJudaismJudaism (began .docx
MAJOR WORLD RELIGIONSJudaismJudaism (began .docx
 

Recently uploaded

EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxEPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxRaymartEstabillo3
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatYousafMalik24
 
Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxDr.Ibrahim Hassaan
 
Planning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxPlanning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxLigayaBacuel1
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementmkooblal
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Mark Reed
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designMIPLM
 
Romantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptxRomantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptxsqpmdrvczh
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPCeline George
 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxChelloAnnAsuncion2
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPCeline George
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Jisc
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfSpandanaRallapalli
 

Recently uploaded (20)

EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxEPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice great
 
Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptx
 
Planning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxPlanning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptx
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of management
 
OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 
9953330565 Low Rate Call Girls In Rohini Delhi NCR
9953330565 Low Rate Call Girls In Rohini  Delhi NCR9953330565 Low Rate Call Girls In Rohini  Delhi NCR
9953330565 Low Rate Call Girls In Rohini Delhi NCR
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 
Romantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptxRomantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptx
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERP
 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERP
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdf
 

@author Jane Programmer @cwid 123 45 678 @class.docx

  • 5. Memphis, TN “Dr. Ed Amoroso reveals in plain English the threats and weaknesses of our critical infra- structure balanced against practices that reduce the exposures. This is an excellent guide to the understanding of the cyber-scape that the security professional navigates. The book takes complex concepts of security and simplifi es it into coherent and simple to understand concepts.” — Arnold Felberbaum , Chief IT Security & Compliance Offi cer, Reed Elsevier “The national infrastructure, which is now vital to communication, commerce and entertain- ment in everyday life, is highly vulnerable to malicious attacks and terrorist threats. Today, it is possible for botnets to penetrate millions of computers around the world in few minutes, and to attack the valuable national infrastructure. “As the New York Times reported, the growing number of threats by botnets suggests that this cyber security issue has become a serious problem, and we are losing the war against these attacks. “While computer security technologies will be useful for network systems, the reality tells us that this conventional approach is not effective enough for the complex, large-scale
  • 6. national infrastructure. “Not only does the author provide comprehensive methodologies based on 25 years of expe- rience in cyber security at AT&T, but he also suggests ‘security through obscurity,’ which attempts to use secrecy to provide security.” — Byeong Gi Lee , President, IEEE Communications Society, and Commissioner of the Korea Communications Commission (KCC) C y b e r A t t a c k s Protecting National Infrastructure Edward G. Amoroso AMSTERDAM • BOSTON • HEIDELBERG • LONDON NEW YORK • OXFORD • PARIS • SAN DIEGO SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO Butterworth-Heinemann is an imprint of Elsevier Acquiring Editor: Pam Chester Development Editor: Gregory Chalson Project Manager: Paul Gottehrer Designer: Alisa Andreola
  • 7. Butterworth-Heinemann is an imprint of Elsevier 30 Corporate Drive, Suite 400, Burlington, MA 01803, USA © 2011 Elsevier Inc. All rights reserved No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions . This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein). Notices Knowledge and best practice in this fi eld are constantly changing. As new research and experience broaden our understanding, changes in research methods or professional practices, may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information or methods described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume
  • 8. any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein. Library of Congress Cataloging-in-Publication Data Amoroso, Edward G. Cyber attacks : protecting national infrastructure / Edward Amoroso. p. cm. Includes index. ISBN 978-0-12-384917-5 1. Cyberterrorism—United States—Prevention. 2. Computer security—United States. 3. National security—United States. I. Title. HV6773.2.A47 2011 363.325�90046780973—dc22 2010040626 British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library. Printed in the United States of America 10 11 12 13 14 10 9 8 7 6 5 4 3 2 1 For information on all BH publications visit our website at www.elsevierdirect.com/security CONTENTS v CONTENTS Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
  • 9. . . . . . . . . . . . . . ix Acknowledgment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi Chapter 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 National Cyber Threats, Vulnerabilities, and Attacks . . . . . . . . . . . . . . . . 4 Botnet Threat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 National Cyber Security Methodology Components . . . . . . . . . . . . . . . 9 Deception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 Depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Discretion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Awareness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Implementing the Principles Nationally . . . . . . . . . . . . . . . . . . . . . . . . 28 Chapter 2 Deception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
  • 10. . . . . . . . . . . . . . 31 Scanning Stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 Deliberately Open Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Discovery Stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 Deceptive Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 Exploitation Stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Procurement Tricks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Exposing Stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Interfaces Between Humans and Computers . . . . . . . . . . . . . . . . . . . . 47 National Deception Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 vi CONTENTS Chapter 3 Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 What Is Separation? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 Functional Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 National Infrastructure Firewalls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 DDOS Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 SCADA Separation Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
  • 11. Physical Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 Insider Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Asset Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 Multilevel Security (MLS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 Chapter 4 Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 Diversity and Worm Propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 Desktop Computer System Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 Diversity Paradox of Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . 80 Network Technology Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 Physical Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85 National Diversity Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 Chapter 5 Commonality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 Meaningful Best Practices for Infrastructure Protection . . . . . . . . . . . . 92 Locally Relevant and Appropriate Security Policy . . . . . . . . . . . . . . . . 95 Culture of Security Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 Infrastructure Simplifi cation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 Certifi cation and Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
  • 12. Career Path and Reward Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 Responsible Past Security Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 National Commonality Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 Chapter 6 Depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 Effectiveness of Depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 Layered Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 Layered E-Mail Virus and Spam Protection . . . . . . . . . . . . . . . . . . . . . . 119 CONTENTS vii Layered Access Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 Layered Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 Layered Intrusion Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 National Program of Depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Chapter 7 Discretion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 Trusted Computing Base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Security Through Obscurity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 Information Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
  • 13. . . . . . . . . . 135 Information Reconnaissance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 Obscurity Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 Organizational Compartments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 National Discretion Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 Chapter 8 Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 Collecting Network Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 Collecting System Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 Security Information and Event Management . . . . . . . . . . . . . . . . . . 154 Large-Scale Trending . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156 Tracking a Worm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 National Collection Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 Chapter 9 Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 Conventional Security Correlation Methods . . . . . . . . . . . . . . . . . . . . 167 Quality and Reliability Issues in Data Correlation . . . . . . . . . . . . . . . . 169 Correlating Data to Detect a Worm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 Correlating Data to Detect a Botnet . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 Large-Scale Correlation Process . . . . . . . . . . . . . . . . . . . . . .
  • 14. . . . . . . . . 174 National Correlation Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 Chapter 10 Awareness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 Detecting Infrastructure Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 Managing Vulnerability Information . . . . . . . . . . . . . . . . . . . . . . . . . . 184 viii CONTENTS Cyber Security Intelligence Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 Risk Management Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188 Security Operations Centers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 National Awareness Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192 Chapter 11 Response. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193 Pre- Versus Post-Attack Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195 Indications and Warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197 Incident Response Teams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 Forensic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201 Law Enforcement Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
  • 15. Disaster Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204 National Response Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206 Appendix Sample National Infrastructure Protection Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 Sample Deception Requirements (Chapter 2) . . . . . . . . . . . . . . . . . . . 208 Sample Separation Requirements (Chapter 3) . . . . . . . . . . . . . . . . . . 209 Sample Diversity Requirements (Chapter 4) . . . . . . . . . . . . . . . . . . . . . 211 Sample Commonality Requirements (Chapter 5) . . . . . . . . . . . . . . . . 212 Sample Depth Requirements (Chapter 6) . . . . . . . . . . . . . . . . . . . . . . 213 Sample Discretion Requirements (Chapter 7) . . . . . . . . . . . . . . . . . . . 214 Sample Collection Requirements (Chapter 8) . . . . . . . . . . . . . . . . . . . 214 Sample Correlation Requirements (Chapter 9) . . . . . . . . . . . . . . . . . . 215 Sample Awareness Requirements (Chapter 10) . . . . . . . . . . . . . . . . . 216 Sample Response Requirements (Chapter 11) . . . . . . . . . . . . . . . . . . 216 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
  • 16. PREFACE ix PREFACE Man did not enter into society to become worse than he was before, nor to have fewer rights than he had before, but to have those rights better secured. Thomas Paine in Common Sense Before you invest any of your time with this book, please take a moment and look over the following points. They outline my basic philosophy of national infrastructure security. I think that your reaction to these points will give you a pretty good idea of what your reaction will be to the book. 1. Citizens of free nations cannot hope to express or enjoy their freedoms if basic security protections are not provided. Security does not suppress freedom—it makes freedom possible. 2. In virtually every modern nation, computers and networks power critical infrastructure elements. As a result, cyber attackers can use computers and networks to damage or ruin the infrastructures that citizens rely on. 3. Security protections, such as those in security books, were designed for small-scale environments such as enterprise computing environments. These protections do not extrapo- late to the protection of massively complex infrastructure.
  • 17. 4. Effective national cyber protections will be driven largely by cooperation and coordination between commercial, indus- trial, and government organizations. Thus, organizational management issues will be as important to national defense as technical issues. 5. Security is a process of risk reduction, not risk removal. Therefore, concrete steps can and should be taken to reduce, but not remove, the risk of cyber attack to national infrastructure. 6. The current risk of catastrophic cyber attack to national infra- structure must be viewed as extremely high, by any realistic measure. Taking little or no action to reduce this risk would be a foolish national decision. The chapters of this book are organized around ten basic principles that will reduce the risk of cyber attack to national infrastructure in a substantive manner. They are driven by x PREFACE experiences gained managing the security of one of the largest, most complex infrastructures in the world, by years of learning from various commercial and government organizations, and by years of interaction with students and academic researchers in the security fi eld. They are also driven by personal experiences dealing with a wide range of successful and unsuccessful cyber attacks, including ones directed at infrastructure of considerable value. The implementation of the ten principles in this book will require national resolve and changes to the way computing and networking elements are designed, built, and operated in the
context of national infrastructure. My hope is that the suggestions offered in these pages will make this process easier.

ACKNOWLEDGMENT

The cyber security experts in the AT&T Chief Security Office, my colleagues across AT&T Labs and the AT&T Chief Technology Office, my colleagues across the entire AT&T business, and my graduate and undergraduate students in the Computer Science Department at the Stevens Institute of Technology have had a profound impact on my thinking and on the contents of this book. In addition, many prominent enterprise customers of AT&T with whom I've had the pleasure of serving, especially those in the United States Federal Government, have been great influencers in the preparation of this material. I'd also like to extend a great thanks to my wife Lee, daughter Stephanie (17), son Matthew (15), and daughter Alicia (9) for their collective patience with my busy schedule.

Edward G. Amoroso
Florham Park, NJ
September 2010
1 INTRODUCTION

Cyber Attacks. DOI: 10.1016/B978-0-12-384917-5.00001-9
© 2011 Elsevier Inc. All rights reserved.

Somewhere in his writings—and I regret having forgotten where—John Von Neumann draws attention to what seemed to him a contrast. He remarked that for simple mechanisms it is often easier to describe how they work than what they do, while for more complicated mechanisms it was usually the other way round.
Edsger W. Dijkstra 1

National infrastructure refers to the complex, underlying delivery and support systems for all large-scale services considered absolutely essential to a nation. These services include emergency response, law enforcement databases, supervisory control and data acquisition (SCADA) systems, power control networks, military support services, consumer entertainment systems, financial applications, and mobile telecommunications. Some national services are provided directly by government, but most are provided by commercial groups such as Internet service providers, airlines, and banks. In addition, certain services considered essential to one nation might include infrastructure support that is controlled by organizations from another nation. This global interdependency is consistent with the trends referred to collectively by Thomas Friedman as a "flat world." 2
  • 20. National infrastructure, especially in the United States, has always been vulnerable to malicious physical attacks such as equipment tampering, cable cuts, facility bombing, and asset theft. The events of September 11, 2001, for example, are the most prominent and recent instance of a massive physical attack directed at national infrastructure. During the past couple of decades, however, vast portions of national infrastructure have become reliant on software, computers, and networks. This reli- ance typically includes remote access, often over the Internet, to 1 1 E.W. Dijkstra, Selected Writings on Computing: A Personal Perspective , Springer-Verlag, New York, 1982, pp. 212–213. 2 T. Friedman, The World Is Flat: A Brief History of the Twenty-First Century , Farrar, Straus, and Giroux, New York, 2007. (Friedman provides a useful economic backdrop to the global aspect of the cyber attack trends suggested in this chapter.) 2 Chapter 1 INTRODUCTION the systems that control national services. Adversaries thus can initiate cyber attacks on infrastructure using worms, viruses, leaks, and the like. These attacks indirectly target national infra- structure through their associated automated controls systems (see Figure 1.1 ). A seemingly obvious approach to dealing with this national
cyber threat would involve the use of well-known computer security techniques. After all, computer security has matured substantially in the past couple of decades, and considerable expertise now exists on how to protect software, computers, and networks. In such a national scheme, safeguards such as firewalls, intrusion detection systems, antivirus software, passwords, scanners, audit trails, and encryption would be directly embedded into infrastructure, just as they are currently in small-scale environments. These national security systems would be connected to a centralized threat management system, and incident response would follow a familiar sort of enterprise process. Furthermore, to ensure security policy compliance, one would expect the usual programs of end-user awareness, security training, and third-party audit to be directed toward the people building and operating national infrastructure. Virtually every national infrastructure protection initiative proposed to date has followed this seemingly straightforward path. 3

While well-known computer security techniques will certainly be useful for national infrastructure, most practical experience to date suggests that this conventional approach will not be sufficient. A primary reason is the size, scale, and scope inherent in complex national infrastructure. For example, where an enterprise might involve manageably sized assets, national infrastructure will require unusually powerful computing support with the ability to handle enormous volumes of data. Such volumes
Figure 1.1 National infrastructure cyber and physical attacks: indirect cyber attacks ("worms, viruses, leaks") reach national infrastructure through its automated control software, computers, and networks, while direct physical attacks ("tampering, cuts, bombs") strike the infrastructure itself.
3 Executive Office of the President, Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure, U.S. White House, Washington, D.C., 2009 (http://handle.dtic.mil/100.2/ADA501541).
  • 23. Chapter 1 INTRODUCTION 3 will easily exceed the storage and processing capacity of typical enterprise security tools such as a commercial threat manage- ment system. Unfortunately, this incompatibility confl icts with current initiatives in government and industry to reduce costs through the use of common commercial off-the-shelf products. In addition, whereas enterprise systems can rely on manual intervention by a local expert during a security disaster, large- scale national infrastructure generally requires a carefully orches- trated response by teams of security experts using predetermined processes. These teams of experts will often work in different groups, organizations, or even countries. In the worst cases, they will cooperate only if forced by government, often sharing just the minimum amount of information to avoid legal conse- quences. An additional problem is that the complexity associated with national infrastructure leads to the bizarre situation where response teams often have partial or incorrect understand- ing about how the underlying systems work. For these reasons, seemingly convenient attempts to apply existing small-scale security processes to large-scale infrastructure attacks will ulti- mately fail (see Figure 1.2 ). As a result, a brand-new type of national infrastructure protec- tion methodology is required—one that combines the best ele- ments of existing computer and network security techniques with the unique and diffi cult challenges associated with complex, large-
  • 24. scale national services. This book offers just such a protection methodology for national infrastructure. It is based on a quarter century of practical experience designing, building, and operating Small-Scale Small Volume Possibly Manual Local Expert High Focused High Volume Large-Scale Process-Based Distributed Expertise Partial or Incorrect Broad Collection Emergency Expertise Knowledge
  • 25. Analysis Large-Scale Attributes Complicate Cyber Security Figure 1.2 Differences between small- and large-scale cyber security. National infrastructure databases far exceed the size of even the largest commercial databases. 4 Chapter 1 INTRODUCTION cyber security systems for government, commercial, and con- sumer infrastructure. It is represented as a series of protection principles that can be applied to new or existing systems. Because of the unique needs of national infrastructure, especially its mas- sive size, scale, and scope, some aspects of the methodology will be unfamiliar to the computer security community. In fact, certain elements of the approach, such as our favorable view of “security through obscurity,” might appear in direct confl ict with conven- tional views of how computers and networks should be protected.
  • 26. National Cyber Threats, Vulnerabilities, and Attacks Conventional computer security is based on the oft-repeated tax- onomy of security threats which includes confi dentiality, integrity, availability, and theft. In the broadest sense, all four diverse threat types will have applicability in national infrastructure. For example, protections are required equally to deal with sensitive information leaks (confi dentiality ), worms affecting the operation of some criti- cal application (integrity), botnets knocking out an important system (availability), or citizens having their identities compromised (theft). Certainly, the availability threat to national services must be viewed as particularly important, given the nature of the threat and its rela- tion to national assets. One should thus expect particular attention to availability threats to national infrastructure. Nevertheless, it makes sense to acknowledge that all four types of security threats in the conventional taxonomy of computer security must be addressed in any national infrastructure protection methodology. Vulnerabilities are more diffi cult to associate with any taxon- omy. Obviously, national infrastructure must address well- known
  • 27. problems such as improperly confi gured equipment, poorly designed local area networks, unpatched system software, exploit- able bugs in application code, and locally disgruntled employ- ees. The problem is that the most fundamental vulnerability in national infrastructure involves the staggering complexity inher- ent in the underlying systems. This complexity is so pervasive that many times security incidents uncover aspects of computing functionality that were previously unknown to anyone, including sometimes the system designers. Furthermore, in certain cases, the optimal security solution involves simplifying and cleaning up poorly conceived infrastructure. This is bad news, because most large organizations are inept at simplifying much of anything. The best one can do for a comprehensive view of the vulner- abilities associated with national infrastructure is to address their Any of the most common security concerns— confi dentiality, integrity, availability, and theft— threaten our national infrastructure. Chapter 1 INTRODUCTION 5 relative exploitation points. This can be done with an abstract national infrastructure cyber security model that includes three types of malicious adversaries: external adversary (hackers on the Internet), internal adversary (trusted insiders), and
supplier adversary (vendors and partners). Using this model, three exploitation points emerge for national infrastructure: remote access (Internet and telework), system administration and normal usage (management and use of software, computers, and networks), and supply chain (procurement and outsourcing) (see Figure 1.3). These three exploitation points and three types of adversaries can be associated with a variety of possible motivations for initiating either a full or test attack on national infrastructure.
Figure 1.3 Adversaries and exploitation points in national infrastructure: the three adversaries (external, internal, supplier) and the three exploitation points (remote access; system administration and normal usage; supply chain) all surround the software, computers, and networks of national infrastructure.

Five Possible Motivations for an Infrastructure Attack
● Country-sponsored warfare —National infrastructure attacks sponsored and funded by enemy countries must be considered the most significant potential motivation, because the intensity of adversary capability and willingness to attack is potentially unlimited.
● Terrorist attack —The terrorist motivation is also significant, especially because groups driven by terror can easily obtain sufficient capability and funding to perform significant attacks on infrastructure.
● Commercially motivated attack —When one company chooses to utilize cyber attacks to gain a commercial advantage, it becomes a national infrastructure incident if the target company is a purveyor of some national asset.
● Financially driven criminal attack —Identity theft is the most common example of a financially driven attack by criminal groups, but other cases exist, such as companies being extorted to avoid a cyber incident.
● Hacking —One must not forget that many types of attacks are still driven by the motivation of hackers, who are often just mischievous youths trying to learn or to build a reputation within the hacking community. This is a much less sinister motivation, and national leaders should try to identify better ways to tap this boundless capability and energy.

Each of the three exploitation points might be utilized in a cyber attack on national infrastructure. For example, a supplier might use a poorly designed supply chain to insert Trojan horse code into a software component that controls some national asset, or a hacker on the Internet might take advantage of some unprotected Internet access point to break into a vulnerable service. Similarly, an insider might use trusted access for either system administration or normal system usage to create an attack. The potential also exists for an external adversary to gain valuable insider access through patient, measured means, such as gaining employment in an infrastructure-supporting organization and then becoming trusted through a long process of work performance. In each case, the possibility exists that a limited type of engagement might be performed as part of a planned test or exercise. This seems especially likely if the attack is country or terrorist sponsored, because it is consistent with past practice.
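The three-by-three structure of this model also lends itself to a simple checklist representation. The short Python sketch below is purely illustrative and is not part of the protection methodology itself; the names and groupings are assumptions taken from the description above, used only to show how an assessment team might enumerate every adversary/exploitation-point pairing so that none is overlooked.

# Illustrative sketch (not from the text): enumerate the abstract national
# infrastructure cyber security model described above -- three adversaries,
# three exploitation points, five candidate motivations.
from itertools import product

ADVERSARIES = [
    "external adversary (hackers on the Internet)",
    "internal adversary (trusted insiders)",
    "supplier adversary (vendors and partners)",
]

EXPLOITATION_POINTS = [
    "remote access (Internet and telework)",
    "system administration and normal usage",
    "supply chain (procurement and outsourcing)",
]

MOTIVATIONS = [
    "country-sponsored warfare",
    "terrorist attack",
    "commercially motivated attack",
    "financially driven criminal attack",
    "hacking",
]

def coverage_checklist():
    # Yield every adversary/exploitation-point pairing so an assessment
    # can confirm that each combination has at least been considered.
    for adversary, point in product(ADVERSARIES, EXPLOITATION_POINTS):
        yield adversary, point

if __name__ == "__main__":
    for adversary, point in coverage_checklist():
        print("assess:", adversary, "via", point)
    print(len(ADVERSARIES) * len(EXPLOITATION_POINTS), "pairings,",
          len(MOTIVATIONS), "candidate motivations each")

A real assessment would, of course, attach evidence, owners, and mitigations to each pairing rather than simply printing it.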
  • 31. At each exploitation point, the vulnerability being used might be a well-known problem previously reported in an authoritative public advisory, or it could be a proprietary issue kept hidden by a local organization. It is entirely appropriate for a recognized authority to make a detailed public vulnerability advisory if the benefi ts of notifying the good guys outweigh the risks of alert- ing the bad guys. This cost–benefi t result usually occurs when many organizations can directly benefi t from the information and can thus take immediate action. When the reported vulner- ability is unique and isolated, however, then reporting the details might be irresponsible, especially if the notifi cation process does not enable a more timely fi x. This is a key issue, because many government authorities continue to consider new rules for man- datory reporting. If the information being demanded is not prop- erly protected, then the reporting process might result in more harm than good. Botnet Threat Perhaps the most insidious type of attack that exists today is the botnet . 4 In short, a botnet involves remote control of a collec- tion of compromised end-user machines, usually broadband- connected PCs. The controlled end-user machines, which are referred to as bots , are programmed to attack some target that is designated by the botnet controller. The attack is tough to stop 4 Much of the material on botnets in this chapter is derived from work done by Brian Rexroad, David Gross, and several others from AT&T.
  • 32. When to issue a vulnerability risk advisory and when to keep the risk confi dential must be determined on a case- by-case basis, depending on the threat. Chapter 1 INTRODUCTION 7 because end-user machines are typically administered in an inef- fective manner. Furthermore, once the attack begins, it occurs from sources potentially scattered across geographic, political, and service provider boundaries. Perhaps worse, bots are pro- grammed to take commands from multiple controller systems, so any attempts to destroy a given controller result in the bots sim- ply homing to another one. The Five Entities That Comprise a Botnet Attack ● Botnet operator —This is the individual, group, or country that creates the botnet, including its setup and operation. When the botnet is used for fi nancial gain, it is the operator who will benefi t. Law enforcement and cyber security initiatives have found it very diffi cult to identify the operators. The press, in particular, has done a poor job reporting on the presumed identity of botnet operators, often suggesting sponsorship by some country when little supporting evidence exists. ● Botnet controller —This is the set of servers that
  • 33. command and control the operation of a botnet. Usually these servers have been maliciously compromised for this purpose. Many times, the real owner of a server that has been compromised will not even realize what has occurred. The type of activity directed by a controller includes all recruitment, setup, communication, and attack activity. Typical botnets include a handful of controllers, usually distributed across the globe in a non-obvious manner. ● Collection of bots —These are the end-user, broadband- connected PCs infected with botnet malware. They are usually owned and operated by normal citizens, who become unwitting and unknowing dupes in a botnet attack. When a botnet includes a concentration of PCs in a given region, observers often incorrectly attribute the attack to that region. The use of smart mobile devices in a botnet will grow as upstream capacity and device processing power increase. ● Botnet software drop —Most botnets include servers designed to store software that might be useful for the botnets during their lifecycle. Military personnel might refer to this as an arsenal . Like controllers, botnet software drop points are usually servers compromised for this purpose, often unknown to the normal server operator. ● Botnet target —This is the location that is targeted in the attack. Usually, it is a website, but it can really be any device, system, or network that is visible to the bots. In most cases, botnets target prominent and often controversial websites, simply because they are visible via the Internet and generally have a great deal at stake in terms of their availability. This increases gain and leverage for the attacker. Logically, however, botnets can target anything visible. The way a botnet works is that the controller is set up to com-
  • 34. municate with the bots via some designated protocol, most often Internet Relay Chat (IRC). This is done via malware inserted into the end-user PCs that comprise the bots. A great challenge in this regard is that home PCs and laptops are so poorly administered. Amazingly, over time, the day-to-day system and security admin- istration task for home computers has gravitated to the end user. 8 Chapter 1 INTRODUCTION This obligation results in both a poor user experience and gen- eral dissatisfaction with the security task. For example, when a typical computer buyer brings a new machine home, it has prob- ably been preloaded with security software by the retailer. From this point onward, however, that home buyer is then tasked with all responsibility for protecting the machine. This includes keep- ing fi rewall, intrusion detection, antivirus, and antispam software up to date, as well as ensuring that all software patches are cur- rent. When these tasks are not well attended, the result is a more vulnerable machine that is easily turned into a bot. (Sadly, even if a machine is properly managed, expert bot software designers might fi nd a way to install the malware anyway.) Once a group of PCs has been compromised into bots, attacks can thus be launched by the controller via a command to the bots, which would then do as they are instructed. This might not occur instantaneously with the infection; in fact, experi- ence suggests that many botnets lay dormant for a great deal of time. Nevertheless, all sorts of attacks are possible in a bot-
net arrangement, including the now-familiar distributed denial of service attack (DDOS). In such a case, the bots create more inbound traffic than the target gateway can handle. For example, if some theoretical gateway allows for 1 Gbps of inbound traffic, and the botnet creates an inbound stream larger than 1 Gbps, then a logjam results at the inbound gateway, and a denial of service condition occurs (see Figure 1.4).
Any serious present study of cyber security must acknowledge the unique threat posed by botnets. Virtually any Internet-connected system is vulnerable to major outages from a botnet-originated DDOS attack. The physics of the situation are especially depressing; that is, a botnet that might steal 500 Kbps
of upstream capacity from each bot (which would generally allow for concurrent normal computing and networking) would only need three bots to collapse a target T1 connection. Following this logic, only 16,000 bots would be required theoretically to fill up a 10-Gbps connection. Because most of the thousands of botnets that have been observed on the Internet are at least this size, the threat is obvious; however, many recent and prominent botnets such as Storm and Conficker are much larger, comprising as many as several million bots, so the threat to national infrastructure is severe and immediate.
Figure 1.4 Sample DDOS attack from a botnet: bots scattered across broadband carriers aim an aggregate 1 Gbps of DDOS traffic at Target A, whose designated carrier provides only 1 Gbps of ingress capacity, so the excess traffic creates a jam.
A DDOS attack is like a cyber traffic jam. Home PC users may never know they are being used for a botnet scheme.
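The arithmetic behind these estimates is simple enough to reproduce in a few lines. The Python sketch below is an illustration only, not a tool from the text; the per-bot upstream rate and the 1.544-Mbps T1 rate are assumed parameters, and slightly different assumptions shift the resulting counts (a straight ceiling division with exactly 500 Kbps per bot gives 4 bots for a T1 and 20,000 bots for 10 Gbps, in the same range as the figures quoted above).

# Illustrative back-of-the-envelope estimate (not from the text) of how many
# bots are needed to saturate a target link with DDOS traffic. The per-bot
# upstream rate is an assumption; real botnet capacity varies widely.
import math

def bots_to_saturate(link_capacity_bps, per_bot_upstream_bps):
    # Smallest whole number of bots whose combined upstream meets or
    # exceeds the capacity of the target link.
    return math.ceil(link_capacity_bps / per_bot_upstream_bps)

if __name__ == "__main__":
    T1_BPS = 1.544e6      # T1 connection, roughly 1.544 Mbps
    TEN_GIG_BPS = 10e9    # 10-Gbps connection
    PER_BOT_BPS = 500e3   # assumed 500 Kbps of stolen upstream per bot

    print("bots to collapse a T1:", bots_to_saturate(T1_BPS, PER_BOT_BPS))
    print("bots to fill 10 Gbps:", bots_to_saturate(TEN_GIG_BPS, PER_BOT_BPS))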
  • 37. National Cyber Security Methodology Components Our proposed methodology for protecting national infrastruc- ture is presented as a series of ten basic design and operation principles. The implication is that, by using these principles as a guide for either improving existing infrastructure components or building new ones, the security result will be desirable, includ- ing a reduced risk from botnets. The methodology addresses all four types of security threats to national infrastructure; it also deals with all three types of adversaries to national infrastructure, as well as the three exploitation points detailed in the infrastruc- ture model. The list of principles in the methodology serves as a guide to the remainder of this chapter, as well as an outline for the remaining chapters of the book: ● Chapter 2: Deception —The openly advertised use of deception creates uncertainty for adversaries because they will not know if a discovered problem is real or a trap. The more common hid- den use of deception allows for real-time behavioral analysis if an intruder is caught in a trap. Programs of national infrastruc- ture protection must include the appropriate use of deception, especially to reduce the malicious partner and supplier risk. ● Chapter 3: Separation —Network separation is currently accomplished using fi rewalls, but programs of national infra- structure protection will require three specifi c changes. Specifi cally, national infrastructure must include network- based fi rewalls on high-capacity backbones to throttle DDOS attacks, internal fi rewalls to segregate infrastructure and reduce the risk of sabotage, and better tailoring of fi rewall fea- tures for specifi c applications such as SCADA protocols. 5
  • 38. 5 R. Kurtz, Securing SCADA Systems , Wiley, New York, 2006. (Kurtz provides an excellent overview of SCADA systems and the current state of the practice in securing them.) 10 Chapter 1 INTRODUCTION ● Chapter 4: Diversity —Maintaining diversity in the products, services, and technologies supporting national infrastruc- ture reduces the chances that one common weakness can be exploited to produce a cascading attack. A massive program of coordinated procurement and supplier management is required to achieve a desired level of national diversity across all assets. This will be tough, because it confl icts with most cost-motivated information technology procurement initia- tives designed to minimize diversity in infrastructure. ● Chapter 5: Commonality —The consistent use of security best practices in the administration of national infrastructure ensures that no infrastructure component is either poorly managed or left completely unguarded. National programs of standards selection and audit validation, especially with an emphasis on uniform programs of simplifi cation, are thus required. This can certainly include citizen end users, but one should never rely on high levels of security compliance in the broad population. ● Chapter 6: Depth —The use of defense in depth in national infrastructure ensures that no critical asset is reliant on a single security layer; thus, if any layer should fail, an addi- tional layer is always present to mitigate an attack. Analysis is
  • 39. required at the national level to ensure that all critical assets are protected by at least two layers, preferably more. ● Chapter 7: Discretion —The use of personal discretion in the sharing of information about national assets is a practical technique that many computer security experts fi nd diffi cult to accept because it confl icts with popular views on “security through obscurity.” Nevertheless, large-scale infrastructure protection cannot be done properly unless a national culture of discretion and secrecy is nurtured. It goes without saying that such discretion should never be put in place to obscure illegal or unethical practices. ● Chapter 8: Collection —The collection of audit log informa- tion is a necessary component of an infrastructure security scheme, but it introduces privacy, size, and scale issues not seen in smaller computer and network settings. National infrastructure protection will require a data collection approach that is acceptable to the citizenry and provides the requisite level of detail for security analysis. ● Chapter 9: Correlation —Correlation is the most fundamen- tal of all analysis techniques for cyber security, but modern attack methods such as botnets greatly complicate its use for attack-related indicators. National-level correlation must be performed using all available sources and the best available Chapter 1 INTRODUCTION 11 technology and algorithms. Correlating information around a botnet attack is one of the more challenging present tasks in
  • 40. cyber security. ● Chapter 10: Awareness —Maintaining situational awareness is more important in large-scale infrastructure protection than in traditional computer and network security because it helps to coordinate the real-time aspect of multiple infrastructure components. A program of national situational awareness must be in place to ensure proper management decision- making for national assets. ● Chapter 11: Response —Incident response for national infra- structure protection is especially diffi cult because it gener- ally involves complex dependencies and interactions between disparate government and commercial groups. It is best accomplished at the national level when it focuses on early indications, rather than on incidents that have already begun to damage national assets. The balance of this chapter will introduce each principle, with discussion on its current use in computer and network security, as well as its expected benefi ts for national infrastructure protection. Deception The principle of deception involves the deliberate introduc- tion of misleading functionality or misinformation into national infrastructure for the purpose of tricking an adversary. The idea is that an adversary would be presented with a view of national infrastructure functionality that might include services or inter- face components that are present for the sole purpose of fakery. Computer scientists refer to this functionality as a honey pot , but the use of deception for national infrastructure could go far beyond this conventional view. Specifi cally, deception can
  • 41. be used to protect against certain types of cyber attacks that no other security method will handle. Law enforcement agen- cies have been using deception effectively for many years, often catching cyber stalkers and criminals by spoofi ng the reported identity of an end point. Even in the presence of such obvi- ous success, however, the cyber security community has yet to embrace deception as a mainstream protection measure. Deception in computing typically involves a layer of clev- erly designed trap functionality strategically embedded into the internal and external interfaces for services. Stated more simply, deception involves fake functionality embedded into real inter- faces. An example might be a deliberately planted trap link on Deception is an oft-used tool by law enforcement agencies to catch cyber stalkers and predators. 12 Chapter 1 INTRODUCTION a website that would lead potential intruders into an environ- ment designed to highlight adversary behavior. When the decep- tion is open and not secret, it might introduce uncertainty for adversaries in the exploitation of real vulnerabilities, because the adversary might suspect that the discovered entry point is a trap. When it is hidden and stealth, which is the more common situa- tion, it serves as the basis for real-time forensic analysis of adver- sary behavior. In either case, the result is a public interface that includes real services, deliberate honey pot traps, and the inevi-
  • 42. table exploitable vulnerabilities that unfortunately will be pres- ent in all nontrivial interfaces (see Figure 1.5 ). Only relatively minor tests of honey pot technology have been reported to date, usually in the context of a research effort. Almost no reports are available on the day-to-day use of decep- tion as a structural component of a real enterprise security program. In fact, the vast majority of security programs for com- panies, government agencies, and national infrastructure would include no such functionality. Academic computer scientists have shown little interest in this type of security, as evidenced by the relatively thin body of literature on the subject. This lack of interest might stem from the discomfort associated with using computing to mislead. Another explanation might be the relative ineffectiveness of deception against the botnet threat, which is clearly the most important security issue on the Internet today. Regardless of the cause, this tendency to avoid the use of decep- tion is unfortunate, because many cyber attacks, such as subtle break-ins by trusted insiders and Trojan horses being maliciously inserted by suppliers into delivered software, cannot be easily remedied by any other means. The most direct benefi t of deception is that it enables foren- sic analysis of intruder activity. By using a honey pot, unique insights into attack methods can be gained by watching what is occurring in real time. Such deception obviously works best in a hidden, stealth mode, unknown to the intruder, because if Interface to Valid Services
  • 43. Trap Interface to Honey Pot Should Resemble Valid Services Vulnerabilities Possible Uncertainty Real Assets Honey Pot ??? Figure 1.5 Components of an interface with deception. Deception is less effective against botnets than other types of attack methods. Chapter 1 INTRODUCTION 13 the intruder realizes that some vulnerable exploitation point is a fake, then no exploitation will occur. Honey pot pioneers Cliff Stoll, Bill Cheswick, and Lance Spitzner have provided a major-
  • 44. ity of the reported experience in real-time forensics using honey pots. They have all suggested that the most diffi cult task involves creating believability in the trap. It is worth noting that connect- ing a honey pot to real assets is a terrible idea. An additional potential benefi t of deception is that it can introduce the clever idea that some discovered vulnerability might instead be a deliberately placed trap. Obviously, such an approach is only effective if the use of deception is not hidden; that is, the adversary must know that deception is an approved and accepted technique used for protection. It should therefore be obvious that the major advantage here is that an accidental vulnerability, one that might previously have been an open door for an intruder, will suddenly look like a possible trap. A further profound notion, perhaps for open discussion, is whether just the implied statement that deception might be present (perhaps without real justifi cation) would actually reduce risk. Suppliers, for example, might be less willing to take the risk of Trojan horse insertion if the procuring organization advertises an open research and development program of detailed software test and inspection against this type of attack. Separation The principle of separation involves enforcement of access policy restrictions on the users and resources in a computing environ- ment. Access policy restrictions result in separation domains, which are arguably the most common security architectural concept in use today. This is good news, because the creation of access-policy-based separation domains will be essential in the protection of national infrastructure. Most companies today will typically use fi rewalls to create perimeters around their
  • 45. presumed enterprise, and access decisions are embedded in the associated rules sets. This use of enterprise fi rewalls for separation is com- plemented by several other common access techniques: ● Authentication and identity management —These methods are used to validate and manage the identities on which separa- tion decisions are made. They are essential in every enterprise but cannot be relied upon solely for infrastructure security. Malicious insiders, for example, will be authorized under such systems. In addition, external attacks such as DDOS are unaf- fected by authentication and identity management. Do not connect honey pots to real assets! 14 Chapter 1 INTRODUCTION ● Logical access controls —The access controls inherent in oper- ating systems and applications provide some degree of sepa- ration, but they are also weak in the presence of compromised insiders. Furthermore, underlying vulnerabilities in appli- cations and operating systems can often be used to subvert these methods. ● LAN controls —Access control lists on local area network (LAN) components can provide separation based on infor- mation such as Internet Protocol (IP) or media access control (MAC) address. In this regard, they are very much like fi rewalls
  • 46. but typically do not extend their scope beyond an isolated segment. ● Firewalls —For large-scale infrastructure, fi rewalls are particu- larly useful, because they separate one network from another. Today, every Internet-based connection is almost certainly protected by some sort of fi rewall functionality. This approach worked especially well in the early years of the Internet, when the number of Internet connections to the enterprise was small. Firewalls do remain useful, however, even with the massive connectivity of most groups to the Internet. As a result, national infrastructure should continue to include the use of fi rewalls to protect known perimeter gateways to the Internet. Given the massive scale and complexity associated with national infrastructure, three specifi c separation enhancements are required, and all are extensions of the fi rewall concept. Required Separation Enhancements for National Infrastructure Protection 1. The use of network-based fi rewalls is absolutely required for many national infrastructure applications, especially ones vulnerable to DDOS attacks from the Internet. This use of network-based mediation can take advantage of high-capacity network backbones if the service provider is involved in running the fi rewalls. 2. The use of fi rewalls to segregate and isolate internal infrastructure components from one another is a mandatory technique for simplifying the implementation of access control policies in an organization. When insiders have malicious intent, any exploit they might attempt should be
  • 47. explicitly contained by internal fi rewalls. 3. The use of commercial off-the-shelf fi rewalls, especially for SCADA usage, will require tailoring of the fi rewall to the unique protocol needs of the application. It is not acceptable for national infrastructure protection to retrofi t the use of a generic, commercial, off-the-shelf tool that is not optimized for its specifi c use (see Figure 1.6 ). Chapter 1 INTRODUCTION 15 With the advent of cloud computing, many enterprise and government agency security managers have come to acknowl- edge the benefi ts of network-based fi rewall processing. The approach scales well and helps to deal with the uncontrolled complexity one typically fi nds in national infrastructure. That said, the reality is that most national assets are still secured by placing a fi rewall at each of the hundreds or thousands of pre- sumed choke points. This approach does not scale and leads to a false sense of security. It should also be recognized that the fi rewall is not the only device subjected to such scale problems. Intrusion detection systems, antivirus fi ltering, threat manage- ment, and denial of service fi ltering also require a network- based approach to function properly in national infrastructure. An additional problem that exists in current national infrastruc- ture is the relative lack of architectural separation used in an internal, trusted network. Most security engineers know that large systems are best protected by dividing them into smaller systems. Firewalls or
  • 48. packet fi ltering routers can be used to segregate an enterprise net- work into manageable domains. Unfortunately, the current state of the practice in infrastructure protection rarely includes a disciplined approach to separating internal assets. This is unfortunate, because it allows an intruder in one domain to have access to a more expan- sive view of the organizational infrastructure. The threat increases when the fi rewall has not been optimized for applications such as SCADA that require specialized protocol support. Required New Separation Mechanisms (Less Familiar) Existing Separation Mechanisms (Less Familiar) Internet Service Provider Commercial and Government Infrastructure
  • 49. Commercial Off-the-Shelf Perimeter Firewalls Authentification and Identity Management, Logical Access Controls, LAN Controls Internal Firewalls Tailored Firewalls (SCADA) Network-Based Firewalls (Carrier) Figure 1.6 Firewall enhancements for national infrastructure. Parceling a network into
  • 50. manageable smaller domains creates an environment that is easier to protect. 16 Chapter 1 INTRODUCTION Diversity The principle of diversity involves the selection and use of tech- nology and systems that are intentionally different in substan- tive ways. These differences can include technology source, programming language, computing platform, physical location, and product vendor. For national infrastructure, realizing such diversity requires a coordinated program of procurement to ensure a proper mix of technologies and vendors. The purpose of introducing these differences is to deliberately create a measure of non-interoperability so that an attack cannot easily cascade from one component to another through exploitation of some common vulnerability. Certainly, it would be possible, even in a diverse environment, for an exploit to cascade, but the likelihood is reduced as the diversity profi le increases. This concept is somewhat controversial, because so much of computer science theory and information technology prac- tice in the past couple of decades has been focused on maxi- mizing interoperability of technologies. This might help explain the relative lack of attentiveness that diversity considerations receive in these fi elds. By way of analogy, however, cyber attacks on national infrastructure are mitigated by diversity technol- ogy just as disease propagation is reduced by a diverse biologi-
  • 51. cal ecosystem. That is, a problem that originates in one area of infrastructure with the intention of automatic propagation will only succeed in the presence of some degree of interoperability. If the technologies are suffi ciently diverse, then the attack propa- gation will be reduced or even stopped. As such, national asset managers are obliged to consider means for introducing diver- sity in a cost-effective manner to realize its security benefi ts (see Figure 1.7 ). Attack Target Component 3 Attack Target Component 2 Non-Diverse (Attack Propagates) Diverse (Attack Propagation Stops) Attack Adversary
  • 52. Target Component 1 Figure 1.7 Introducing diversity to national infrastructure. Chapter 1 INTRODUCTION 17 Diversity is especially tough to implement in national infra- structure for several reasons. First, it must be acknowledged that a single, major software vendor tends to currently dominate the personal computer (PC) operating system business landscape in most government and enterprise settings. This is not likely to change, so national infrastructure security initiatives must sim- ply accept an ecosystem lacking in diversity in the PC landscape. The profi le for operating system software on computer servers is slightly better from a diversity perspective, but the choices remain limited to a very small number of available sources. Mobile oper- ating systems currently offer considerable diversity, but one can- not help but expect to see a trend toward greater consolidation. Second, diversity confl icts with the often-found organiza- tional goal of simplifying supplier and vendor relationships; that is, when a common technology is used throughout an organiza- tion, day-to-day maintenance, administration, and training costs
  • 53. are minimized. Furthermore, by purchasing in bulk, better terms are often available from a vendor. In contrast, the use of diversity could result in a reduction in the level of service provided in an organization. For example, suppose that an Internet service pro- vider offers particularly secure and reliable network services to an organization. Perhaps the reliability is even measured to some impressive quantitative availability metric. If the organization is committed to diversity, then one might be forced to actually introduce a second provider with lower levels of reliability. In spite of these drawbacks, diversity carries benefi ts that are indisputable for large-scale infrastructure. One of the great chal- lenges in national infrastructure protection will thus involve fi nd- ing ways to diversify technology products and services without increasing costs and losing business leverage with vendors. Consistency The principle of consistency involves uniform attention to secu- rity best practices across national infrastructure components. Determining which best practices are relevant for which national asset requires a combination of local knowledge about the asset, as well as broader knowledge of security vulnerabilities in generic infrastructure protection. Thus, the most mature approach to consistency will combine compliance with relevant standards such as the Sarbanes–Oxley controls in the United States, with locally derived security policies that are tailored to the organiza- tional mission. This implies that every organization charged with
  • 54. the design or operation of national infrastructure must have a Enforcing diversity of products and services might seem counterintuitive if you have a reliable provider. 18 Chapter 1 INTRODUCTION local security policy. Amazingly, some large groups do not have such a policy today. The types of best practices that are likely to be relevant for national infrastructure include well-defi ned software lifecycle methodologies, timely processes for patching software and sys- tems, segregation of duty controls in system administration, threat management of all collected security information, secu- rity awareness training for all system administrators, operational confi gurations for infrastructure management, and use of soft- ware security tools to ensure proper integrity management. Most security experts agree on which best practices to include in a generic set of security requirements, as evidenced by the inclu- sion of a common core set of practices in every security standard. Attentiveness to consistency is thus one of the less controversial of our recommended principles. The greatest challenge in implementing best practice consis- tency across infrastructure involves auditing. The typical audit process is performed by an independent third-party entity doing
  • 55. an analysis of target infrastructure to determine consistency with a desired standard. The result of the audit is usually a numeric score, which is then reported widely and used for management decisions. In the United States, agencies of the federal govern- ment are audited against a cyber security standard known as FISMA (Federal Information Security Management Act). While auditing does lead to improved best practice coverage, there are often problems. For example, many audits are done poorly, which results in confusion and improper management deci- sions. In addition, with all the emphasis on numeric ratings, many agencies focus more on their score than on good security practice. Today, organizations charged with protecting national infra- structure are subjected to several types of security audits. Streamlining these standards would certainly be a good idea, but some additional items for consideration include improving the types of common training provided to security administrators, as well as including past practice in infrastructure protection in common audit standards. The most obvious practical consid- eration for national infrastructure, however, would be national- level agreement on which standard or standards would be used to determine competence to protect national assets. While this is a straightforward concept, it could be tough to obtain wide con- currence among all national participants. A related issue involves commonality in national infrastructure operational confi gu- rations; this reduces the chances that a rogue confi guration A good audit score is important but should not replace good security practices.
  • 56. A national standard of competence for protecting our assets is needed. Chapter 1 INTRODUCTION 19 installed for malicious purposes, perhaps by compromised insiders. Depth The principle of depth involves the use of multiple security layers of protection for national infrastructure assets. These layers pro- tect assets from both internal and external attacks via the familiar “defense in depth” approach; that is, multiple layers reduce the risk of attack by increasing the chances that at least one layer will be effective. This should appear to be a somewhat sketchy situ- ation, however, from the perspective of traditional engineering. Civil engineers, for example, would never be comfortable design- ing a structure with multiple fl awed supports in the hopes that one of them will hold the load. Unfortunately, cyber security experts have no choice but to rely on this fl awed notion, perhaps highlighting the relative immaturity of security as an engineering discipline. One hint as to why depth is such an important requirement is that national infrastructure components are currently con- trolled by software, and everyone knows that the current state
  • 57. of software engineering is abysmal. Compared to other types of engineering, software stands out as the only one that accepts the creation of knowingly fl awed products as acceptable. The result is that all nontrivial software has exploitable vulnerabilities, so the idea that one should create multiple layers of security defense is unavoidable. It is worth mentioning that the degree of diversity in these layers will also have a direct impact on their effectiveness (see Figure 1.8 ). To maximize the usefulness of defense layers in national infra- structure, it is recommended that a combination of functional Software engineering standards do not contain the same level of quality as civil and other engineering standards. Attack Gets Through Here... ...Hopefully Stopped Here Multiple Layers of Protection Adversary Target Asset Asset Protected Via Depth Approach
  • 58. Figure 1.8 National infrastructure security through defense in depth. 20 Chapter 1 INTRODUCTION and procedural controls be included. For example, a common fi rst layer of defense is to install an access control mechanism for the admission of devices to the local area network. This could involve router controls in a small network or fi rewall access rules in an enterprise. In either case, this fi rst line of defense is clearly functional. As such, a good choice for a second layer of defense might involve something procedural, such as the deployment of scanning to determine if inappropriate devices have gotten through the fi rst layer. Such diversity will increase the chances that the cause of failure in one layer is unlikely to cause a similar failure in another layer. A great complication in national infrastructure protection is that many layers of defense assume the existence of a defi ned net- work perimeter. For example, the presence of many fl aws in enter- prise security found by auditors is mitigated by the recognition that intruders would have to penetrate the enterprise perimeter to exploit these weaknesses. Unfortunately, for most national assets, fi nding a perimeter is no longer possible. The assets of a country,
  • 59. for example, are almost impossible to defi ne within some geo- graphic or political boundary, much less a network one. Security managers must therefore be creative in identifying controls that will be meaningful for complex assets whose properties are not always evident. The risk of getting this wrong is that in providing multiple layers of defense, one might misapply the protections and leave some portion of the asset base with no layers in place. Discretion The principle of discretion involves individuals and groups making good decisions to obscure sensitive information about national infrastructure. This is done by combining formal man- datory information protection programs with informal discre- tionary behavior. Formal mandatory programs have been in place for many years in the U.S. federal government, where docu- ments are associated with classifi cations, and policy enforce- ment is based on clearances granted to individuals. In the most intense environments, such as top-secret compartments in the intelligence community, violations of access policies could be interpreted as espionage, with all of the associated criminal implications. For this reason, prominent breaches of highly clas- sifi ed government information are not common. In commercial settings, formal information protection pro- grams are gaining wider acceptance because of the increased need to protect personally identifi able information (PII) such as Naturally, top-secret information within the intelligence community is at great risk for attack or infi ltration.
  • 60. Chapter 1 INTRODUCTION 21 credit card numbers. Employees of companies around the world are starting to understand the importance of obscuring certain aspects of corporate activity, and this is healthy for national infra- structure protection. In fact, programs of discretion for national infrastructure protection will require a combination of corpo- rate and government security policy enforcement, perhaps with custom-designed information markings for national assets. The resultant discretionary policy serves as a layer of protection to prevent national infrastructure-related information from reach- ing individuals who have no need to know such information. A barrier in our recommended application of discretion is the maligned notion of “security through obscurity.” Security experts, especially cryptographers, have long complained that obscurity is an unacceptable protection approach. They correctly reference the problems of trying to secure a system by hiding its underly- ing detail. Inevitably, an adversary discovers the hidden design secrets and the security protection is lost. For this reason, con- ventional computer security correctly dictates an open approach to software, design, and algorithms. An advantage of this open approach is the social review that comes with widespread adver- tisement; for example, the likelihood is low of software ever being correct without a signifi cant amount of intense review by experts. So, the general computer security argument against “security through obscurity” is largely valid in most cases.
  • 61. Nevertheless, any manager charged with the protection of nontrivial, large-scale infrastructure will tell you that discretion and, yes, obscurity are indispensable components in a protec- tion program. Obscuring details around technology used, soft- ware deployed, systems purchased, and confi gurations managed will help to avoid or at least slow down certain types of attacks. Hackers often claim that by discovering this type of informa- tion about a company and then advertising the weaknesses they are actually doing the local security team a favor. They suggest that such advertisement is required to motivate a security team toward a solution, but this is actually nonsense. Programs around proper discretion and obscurity for infrastructure information are indispensable and must be coordinated at the national level. Collection The principle of collection involves automated gathering of sys- tem-related information about national infrastructure to enable security analysis. Such collection is usually done in real time and involves probes or hooks in applications, system software, net- work elements, or hardware devices that gather information of “Security through obscurity” may actually leave assets more vulnerable to attack than an open approach would. 22 Chapter 1 INTRODUCTION interest. The use of audit trails in small-scale computer
  • 62. security is an example of a long-standing collection practice that introduces very little controversy among experts as to its utility. Security devices such as fi rewalls produce log fi les, and systems purported to have some degree of security usefulness will also generate an audit trail output. The practice is so common that a new type of product, called a security information management system (SIMS), has been developed to process all this data. The primary operational challenge in setting up the right type of collection process for computers and networks has been two- fold: First, decisions must be made about what types of informa- tion are to be collected. If this decision is made correctly, then the information collected should correspond to exactly the type of data required for security analysis, and nothing else. Second, decisions must be made about how much information is actu- ally collected. This might involve the use of existing system func- tions, such as enabling the automatic generation of statistics on a router; or it could involve the introduction of some new type of function that deliberately gathers the desired information. Once these considerations are handled, appropriate mechanisms for collecting data from national infrastructure can be embedded into the security architecture (see Figure 1.9 ). The technical and operational challenges associated with the collection of logs and audit trails are heightened in the protec- tion of national assets. Because national infrastructure is so com- plex, determining what information should be collected turns out to be a diffi cult exercise. In particular, the potential arises with large-scale collection to intrude on the privacy of individu-
  • 63. als and groups within a nation. As such, any initiative to protect Typical Infrastructure Collection Points Type and Volume Issues Device Status Monitors Distributed Across Government and Industry Interpretation and Action Operating System Logs Network Monitors Application Hooks Transport Issues Privacy Issues Data
  • 64. Collection Repositories Figure 1.9 Collecting national infrastructure-related security information. Chapter 1 INTRODUCTION 23 infrastructure through the collection of data must include at least some measure of privacy policy determination. Similarly, the vol- umes of data collected from large infrastructure can exceed prac- tical limits. Telecommunications collection systems designed to protect the integrity of a service provider backbone, for example, can easily generate many terabytes of data in hours of processing. In both cases, technical and operational expertise must be applied to ensure that the appropriate data is collected in the proper amounts. The good news is that virtually all security protection algorithms require no deep, probing information of the type that might generate privacy or volumetric issues. The challenge arises instead when collection is done without proper advance analysis which often results in the collection of more data than is needed. This can easily lead to privacy problems in some national collection repositories, so planning is particularly necessary. In any event, a national strategy of data collection is required, with the usual sorts of legal and policy guidance on who collects what and under which circumstances. As we sug- gested above, this exercise must be guided by the requirements
Correlation
The principle of correlation involves a specific type of analysis that can be performed on factors related to national infrastructure protection. The goal of correlation is to identify whether security-related indicators might emerge from the analysis. For example, if some national computing asset begins operating in a sluggish manner, then other factors would be examined for a possible correlative relationship. One could imagine the local and wide area networks being analyzed for traffic that might be of an attack nature. In addition, similar computing assets might be examined to determine if they are experiencing a similar functional problem. Also, all software and services embedded in the national asset might be analyzed for known vulnerabilities. In each case, the purpose of the correlation is to combine and compare factors to help explain a given security issue. This type of comparison-oriented analysis is indispensable for national infrastructure because of its complexity.
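The combine-and-compare idea can be sketched in a few lines of code. This is an illustrative sketch only (the factor names, severity scale, and threshold are invented, not the book's method): a correlator that reports an asset only when two or more independent factors, such as host sluggishness and anomalous network traffic, exceed a severity threshold.

// correlation_sketch.cpp -- illustrative sketch only; factor names and thresholds are invented.
#include <iostream>
#include <map>
#include <string>
#include <vector>

// One observation from an ingress feed: which asset, which factor, and a severity in [0, 1].
struct Observation
{
  std::string asset;
  std::string factor; // e.g., "cpu_sluggish", "traffic_anomaly", "known_vuln"
  double severity;
};

// Combine and compare factors per asset: report an asset only when at least two
// distinct factors exceed the severity threshold, so no single factor decides alone.
std::vector<std::string> correlate(const std::vector<Observation>& feed, double threshold)
{
  std::map<std::string, std::map<std::string, double>> byAsset;
  for (const auto& obs : feed)
  {
    double& worst = byAsset[obs.asset][obs.factor];
    if (obs.severity > worst)
    {
      worst = obs.severity; // keep the worst report seen per factor
    }
  }

  std::vector<std::string> findings;
  for (const auto& assetEntry : byAsset)
  {
    int strongFactors = 0;
    for (const auto& factorEntry : assetEntry.second)
    {
      if (factorEntry.second >= threshold)
      {
        strongFactors++;
      }
    }
    if (strongFactors >= 2)
    {
      findings.push_back(assetEntry.first + ": " + std::to_string(strongFactors) +
                         " correlated factors above threshold");
    }
  }
  return findings;
}

int main()
{
  std::vector<Observation> feed = {
    {"billing-db", "cpu_sluggish", 0.8},
    {"billing-db", "traffic_anomaly", 0.7},
    {"dns-edge", "known_vuln", 0.9} // a single factor alone is not reported
  };

  for (const auto& finding : correlate(feed, 0.5))
  {
    std::cout << finding << std::endl; // prints only the billing-db finding
  }
  return 0;
}

In a real fusion setting the comparison would span many more factors and would inform a human analyst rather than act automatically, consistent with the human-judgment point made below.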
Interestingly, almost every major national infrastructure protection initiative attempted to date has included a fusion center for real-time correlation of data. A fusion center is a physical security operations center with means for collecting and analyzing multiple sources of ingress data. It is not uncommon for such a center to include massive display screens with colorful, visualized representations, nor is it uncommon to find such centers in the military with teams of enlisted people performing the manual chores. This is an important point, because, while such automated fusion is certainly promising, best practice in correlation for national infrastructure protection must include the requirement that human judgment be included in the analysis. Thus, regardless of whether resources are centralized into one physical location, the reality is that human beings will need to be included in the processing (see Figure 1.10).
Figure 1.10 National infrastructure high-level correlation approach. (The figure shows multiple ingress data feeds entering a correlation process that performs comparison and analysis of relevant factors, derives real-time conclusions, and outputs recommended actions.)
Monitoring and analyzing networks and data collection may reveal a hidden or emerging security threat.
In practice, fusion centers and the associated processes and correlation algorithms have been tough to implement, even in small-scale environments.
Botnets, for example, involve the use of source systems that are selected almost arbitrarily. As such, the use of correlation to determine where and why the attack is occurring has been useless. In fact, correlating geographic information with the sources of botnet activity has even led to many false conclusions about who is attacking whom. Countless hours have been spent by security teams poring through botnet information trying to determine the source, and the best one can hope for might be information about controllers or software drops. In the end, current correlation approaches fall short. What is needed to improve present correlation capabilities for national infrastructure protection involves multiple steps.
Three Steps to Improve Current Correlation Capabilities
1. The actual computer science around correlation algorithms needs to be better investigated. Little attention has been placed in academic computer science and applied mathematics departments to multifactor correlation of real-time security data. This could be changed with appropriate funding and grant emphasis from the government.
2. The ability to identify reliable data feeds needs to be greatly improved. Too much attention has been placed on ad hoc collection of volunteered feeds, and this complicates the ability for analysis to perform meaningful correlation.
3. The design and operation of a national-level fusion center must be given serious consideration. Some means must be identified for putting aside political and funding problems in order to accomplish this important objective.
Awareness
The principle of awareness involves an organization understanding the differences, in real time and at all times, between observed and normal status in national infrastructure.
This status can include risks, vulnerabilities, and behavior in the target infrastructure. Behavior refers here to the mix of user activity, system processing, network traffic, and computing volumes in the software, computers, and systems that comprise infrastructure. The implication is that the organization can somehow characterize a given situation as being either normal or abnormal. Furthermore, the organization must have the ability to detect and measure differences between these two behavioral states. Correlation analysis is usually inherent in such determinations, but the real challenge is less the algorithms and more the processes that must be in place to ensure situational awareness every hour of every day. For example, if a new vulnerability arises that has impact on the local infrastructure, then this knowledge must be obtained and factored into management decisions immediately.
Managers of national infrastructure generally do not have to be convinced that situational awareness is important. The big issue instead is how to achieve this goal. In practice, real-time awareness requires attentiveness and vigilance rarely found in normal computer security. Data must first be collected and enabled to flow into a fusion center at all times so correlation can take place. The results of the correlation must be used to establish a profiled baseline of behavior so differences can be measured. This sounds easier than it is, because so many odd situations have the ability to mimic normal behavior (when it is really a problem) or a problem (when it really is nothing).
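A profiled baseline and a measured difference can be sketched very simply. The following is an illustrative sketch only (the metric, the historical values, and the three-standard-deviation rule are assumptions, not the book's method): a baseline is built from historical samples, and a current observation is classified as normal or abnormal against it.

// awareness_sketch.cpp -- illustrative sketch only; metric, data, and limits are invented.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// A profiled baseline of "normal" behavior built from historical samples,
// against which current observations are measured (assumes a non-empty history).
class Baseline
{
public:
  explicit Baseline(const std::vector<double>& history)
  {
    for (double value : history)
    {
      sum += value;
      sumSquares += value * value;
    }
    count = history.size();
  }

  double mean() const { return sum / count; }

  double stddev() const
  {
    double m = mean();
    return std::sqrt(std::max(0.0, sumSquares / count - m * m));
  }

  // "Abnormal" here simply means more than k standard deviations from the mean.
  bool abnormal(double observed, double k) const
  {
    return std::fabs(observed - mean()) > k * stddev();
  }

private:
  double sum = 0.0;
  double sumSquares = 0.0;
  std::size_t count = 0;
};

int main()
{
  // Historical login-failure counts per hour (hypothetical data).
  Baseline normalProfile({4, 6, 5, 7, 5, 6, 4, 5, 6, 5});

  std::vector<double> observations = {6.0, 40.0};
  for (double observed : observations)
  {
    std::cout << "observed=" << observed << " -> "
              << (normalProfile.abnormal(observed, 3.0)
                    ? "abnormal, raise an indicator"
                    : "within the normal profile")
              << std::endl;
  }
  return 0;
}

The hard part in practice is not this arithmetic but keeping the baseline current and deciding, with human judgment, which measured differences actually matter.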
Nevertheless, national infrastructure protection demands that managers of assets create a locally relevant means for being able to comment accurately on the state of security at all times. This allows for proper management decisions about security (see Figure 1.11).
Figure 1.11 Real-time situation awareness process flow. (The figure shows collection of raw data feeding a fusion step that combines automated and manual processing, producing intelligence and situational awareness targeted at managers.)
Awareness builds on collection and correlation, but is not limited to those areas alone.
Interestingly, situational awareness has not been considered a major component of the computer security equation to date. The concept plays no substantive role in small-scale security, such as in a home network, because when the computing base to be protected is simple enough, characterizing real-time situational status is just not necessary. Similarly, when a security manager puts in place security controls for a small enterprise, situational awareness is not the highest priority. Generally, the closest one might expect to some degree of real-time awareness for a small system might be an occasional review of system log files. So, the transition from small-scale to large-scale infrastructure protection does require a new attentiveness to situational awareness that is not well developed. It is also worth noting that the general notion of “user awareness” of security is also not the principle specified here.
While it is helpful for end users to have knowledge of security, any professionally designed program of national infrastructure security must presume that a high percentage of end users will always make the wrong sorts of security decisions if allowed. The implication is that national infrastructure protection must never rely on the decision-making of end users through programs of awareness.
A further advance that is necessary for situational awareness involves enhancements in approaches to security metrics reporting. Where the non-cyber national intelligence community has done a great job developing means for delivering daily intelligence briefs to senior government officials, the cyber security community has rarely considered this approach. The reality is that, for situation awareness to become a structural component of national infrastructure protection, valid metrics must be developed to accurately portray status, and these must be codified into a suitable type of regular intelligence report that senior officials can use to determine security status. It would not be unreasonable to expect this cyber security intelligence to flow from a central point such as a fusion center, but in general this is not a requirement.
Large-scale infrastructure protection requires a higher level of awareness than most groups currently employ.
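As a sketch of what codified metrics reporting might look like, the following illustrative example (the metric names, thresholds, and report format are all invented, not drawn from the book) renders a short daily brief with one status line per metric and an overall posture:

// metrics_brief_sketch.cpp -- illustrative sketch only; metric names, thresholds, and format are invented.
#include <iostream>
#include <string>
#include <vector>

struct Metric
{
  std::string name;
  int value;
  int alarmAbove; // value beyond which the metric deserves senior attention
};

// Render a short daily brief: one status line per metric plus an overall posture,
// the kind of codified, regular report the text argues senior officials should receive.
std::string dailyBrief(const std::string& date, const std::vector<Metric>& metrics)
{
  std::string brief = "Cyber security daily brief - " + date + "\n";
  int flagged = 0;
  for (const auto& metric : metrics)
  {
    bool alarm = metric.value > metric.alarmAbove;
    if (alarm)
    {
      flagged++;
    }
    brief += "  " + metric.name + ": " + std::to_string(metric.value) +
             (alarm ? "  [ATTENTION]" : "  [nominal]") + "\n";
  }
  brief += "Overall posture: " + std::string(flagged == 0 ? "NORMAL" : "ELEVATED") + "\n";
  return brief;
}

int main()
{
  std::vector<Metric> today = {
    {"unresolved critical vulnerabilities", 3, 10},
    {"correlated attack indicators (last 24h)", 14, 5},
    {"collection feeds offline", 0, 1}
  };
  std::cout << dailyBrief("2011-03-01", today);
  return 0;
}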
Response
The principle of response involves assurance that processes are in place to react to any security-related indicator that becomes available. These indicators should flow into the response process primarily from the situational awareness layer. National infrastructure response should emphasize indicators rather than incidents. In most current computer security applications, the response team waits for serious problems to occur, usually including complaints from users, applications running poorly, and networks operating in a sluggish manner. Once this occurs, the response team springs into action, even though by this time the security game has already been lost. For essential national infrastructure services, the idea of waiting for the service to degrade before responding does not make logical sense.
An additional response-related change for national infrastructure protection is that the maligned concept of “false positive” must be reconsidered. In current small-scale environments, a major goal of the computer security team is to minimize the number of response cases that are initiated only to find that nothing was wrong after all. This is an easy goal to reach by simply waiting for disasters to be confirmed beyond a shadow of a doubt before response is initiated. For national infrastructure, however, this is obviously unacceptable. Instead, response must follow indicators, and the concept of minimizing false positives must not be part of the approach. The only quantitative metric that must be minimized in national-level response is risk (see Figure 1.12).
A challenge that must be considered in establishing response functions for national asset protection is that relevant indicators often arise long before any harmful effects are seen. This suggests that infrastructure protection must have accurate situational awareness that considers much more than just visible impacts such as users having trouble, networks being down, or services being unavailable. Instead, often subtle indicators must be analyzed carefully, which is where the challenges arise with false positives.
When response teams agree to consider such indicators, it becomes more likely that such indicators are benign. A great secret to proper incident response for national infrastructure is that higher false positive rates might actually be a good sign.
A higher rate of false positives must be tolerated for national infrastructure protection.
Figure 1.12 National infrastructure security response approach. (The figure contrasts a pre-attack response process driven by indicators, which carries a higher false-positive rate but lower security risk and is recommended for national infrastructure, with a post-attack response process driven by effects observed after the attack threshold, which carries a lower false-positive rate but higher security risk and should be used for national infrastructure only if required.)
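To make the indicator-versus-incident distinction concrete, here is an illustrative sketch (the indicator descriptions, confidence scores, and thresholds are invented): a response trigger that accumulates confidence from pre-attack indicators and opens a case when a deliberately low threshold is crossed, rather than waiting for confirmed effects.

// response_sketch.cpp -- illustrative sketch only; indicators, scores, and thresholds are invented.
#include <iostream>
#include <string>
#include <vector>

struct Indicator
{
  std::string description;
  double confidence; // 0..1, how strongly this pre-attack signal suggests trouble
};

// Indicator-driven response: open a case as soon as accumulated confidence crosses the
// threshold, instead of waiting for user-visible damage. A lower threshold accepts more
// false positives in exchange for lower residual risk, as in the pre-attack process above.
bool shouldOpenResponseCase(const std::vector<Indicator>& indicators, double threshold)
{
  double score = 0.0;
  for (const auto& indicator : indicators)
  {
    score += indicator.confidence;
  }
  return score >= threshold;
}

int main()
{
  std::vector<Indicator> observed = {
    {"unusual outbound DNS volume", 0.3},
    {"new administrative account created off-hours", 0.4},
    {"scanning traffic against the control network", 0.2}
  };

  // Two postures: a low pre-attack threshold (the recommended stance in the text) and a
  // high threshold that effectively waits for confirmed effects before acting.
  std::vector<double> thresholds = {0.5, 2.5};
  for (double threshold : thresholds)
  {
    std::cout << "threshold " << threshold << ": "
              << (shouldOpenResponseCase(observed, threshold)
                    ? "open a response case (accept a possible false positive)"
                    : "no action yet (accept higher residual risk)")
              << std::endl;
  }
  return 0;
}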
It is worth noting that the principles of collection, correlation, awareness, and response are all consistent with the implementation of a national fusion center. Clearly, response activities are often dependent on a real-time, ubiquitous operations center to coordinate activities, contact key individuals, collect data as it becomes available, and document progress in the response activities. As such, it should not be unexpected that national-level response for cyber security should include some sort of centralized national center. The creation of such a facility should be the centerpiece of any national infrastructure protection program and should involve the active participation of all organizations with responsibility for national services.
Implementing the Principles Nationally
To effectively apply this full set of security principles in practice for national infrastructure protection, several practical implementation considerations emerge:
● Commissions and groups—Numerous commissions and groups have been created over the years with the purpose of national infrastructure protection. Most have had some minor positive impact on infrastructure security, but none has had sufficient impact to reduce present national risk to acceptable levels. An observation here is that many of these commissions and groups have become the end rather than the means toward a cyber security solution. When this occurs, their likelihood of success diminishes considerably. Future commissions and groups should take this into consideration.
● Information sharing—Too much attention is placed on information sharing between government and industry, perhaps because information sharing would seem on the surface to carry much benefit to both parties. The advice here is that a comprehensive information sharing program is not easy to implement simply because organizations prefer to maintain a low profile when fighting a vulnerability or attack. In addition, the presumption that some organization—government or commercial—might have some nugget of information
that could solve a cyber attack or reduce risk is not generally consistent with practice. Thus, the motivation for a commercial entity to share vulnerability or incident-related information with the government is low; very little value generally comes from such sharing.
● International cooperation—National initiatives focused on creating government cyber security legislation must acknowledge that the Internet is global, as are the shared services such as the domain name system (DNS) that all national and global assets are so dependent upon. Thus, any program of national infrastructure protection must include provisions for international cooperation, and such cooperation implies agreements between participants that will be followed as long as everyone perceives benefit.
● Technical and operational costs—To implement the principles described above, considerable technical and operational costs will need to be covered across government and commercial environments. While it is tempting to presume that the purveyors of national infrastructure can simply absorb these costs into normal business budgets, this has not been the case in the past. Instead, the emphasis should be on rewards and incentives for organizations that make the decision to implement these principles.
This point is critical because it suggests that the best possible use of government funds might be as straightforward as helping to directly fund initiatives that will help to secure national assets.
The bulk of our discussion in the ensuing chapters is technical in nature; that is, programmatic and political issues are conveniently ignored. This does not diminish their importance, but rather is driven by our decision to separate our concerns and focus in this book on the details of “what” must be done, rather than “how.”
2 DECEPTION
Create a highly controlled network. Within that network, you place production systems and then monitor, capture, and analyze all activity that happens within that network. Because this is not a production network, but rather our Honeynet, any traffic is suspicious by nature.
The Honeynet Project1
1 The Honeynet Project, Know Your Enemy: Revealing the Security Tools, Tactics, and Motives of the Blackhat Community, Addison–Wesley Professional, New York, 2002. (I highly recommend this amazing and original book.) See also B. Cheswick and S. Bellovin, Firewalls and Internet Security: Repelling the Wily Hacker, 1st ed., Addison–Wesley Professional, New York, 1994; C. Stoll, The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage, Pocket Books, New York, 2005.
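The honeynet idea in the epigraph (any connection to a system with no production purpose is suspicious by nature) can be sketched as a minimal decoy listener. This is an illustrative sketch only, not the Honeynet Project's software; it assumes a POSIX system and an arbitrary unused decoy port, and it simply records every connection as a suspicious event.

// honeypot_sketch.cpp -- illustrative sketch only; assumes a POSIX system and an unused decoy port.
// Compile (assumed): g++ honeypot_sketch.cpp -o honeypot_sketch
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <ctime>
#include <iostream>

int main()
{
  const unsigned short port = 2222; // a decoy port with no production service behind it

  int listener = socket(AF_INET, SOCK_STREAM, 0);
  if (listener < 0)
  {
    perror("socket");
    return 1;
  }

  int yes = 1;
  setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

  sockaddr_in address{};
  address.sin_family = AF_INET;
  address.sin_addr.s_addr = htonl(INADDR_ANY);
  address.sin_port = htons(port);

  if (bind(listener, reinterpret_cast<sockaddr*>(&address), sizeof(address)) < 0 ||
      listen(listener, 16) < 0)
  {
    perror("bind/listen");
    return 1;
  }
  std::cout << "decoy service listening on port " << port << std::endl;

  // Nothing legitimate should ever connect here, so every connection is treated
  // as a suspicious event worth recording for later analysis.
  while (true)
  {
    sockaddr_in peer{};
    socklen_t length = sizeof(peer);
    int connection = accept(listener, reinterpret_cast<sockaddr*>(&peer), &length);
    if (connection < 0)
    {
      continue;
    }

    char peerAddress[INET_ADDRSTRLEN] = {0};
    inet_ntop(AF_INET, &peer.sin_addr, peerAddress, sizeof(peerAddress));
    std::time_t now = std::time(nullptr);
    std::cout << "SUSPICIOUS: connection from " << peerAddress << ":"
              << ntohs(peer.sin_port) << " at " << std::ctime(&now);

    const char banner[] = "login: "; // bogus prompt intended to waste the intruder's time
    send(connection, banner, sizeof(banner) - 1, 0);
    close(connection);
  }
}

A production honey pot would, of course, feed these events into the collection and correlation machinery described in Chapter 1 rather than printing them to the console.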
The use of deception in computing involves deliberately misleading an adversary by creating a system component that looks real but is in fact a trap. The system component, sometimes referred to as a honey pot, is usually functionality embedded in a computing or networking system, but it can also be a physical asset designed to trick an intruder. In both cases, a common interface is presented to an adversary who might access real functionality connected to real assets, but who might also unknowingly access deceptive functionality connected to bogus assets. In a well-designed deceptive system, the distinction between real and trap functionality should not be apparent to the intruder (see Figure 2.1).
The purpose of deception, ultimately, is to enhance security, so in the context of national infrastructure it can be used for large-scale protection of assets. The reason why deception works is that it helps accomplish any or all of the following four security objectives:
● Attention—The attention of an adversary can be diverted from real assets toward bogus ones.
● Energy—The valuable time and energy of an adversary can be wasted on bogus targets.
● Uncertainty—Uncertainty can be created around the veracity of a discovered vulnerability.
● Analysis—A basis can be provided for real-time security analysis of adversary behavior.
The fact that deception diverts the attention of adversaries, while also wasting their time and energy, should be familiar to anyone who has ever used a honey pot on a network. As long as the trap is set properly and the honey pot is sufficiently realistic, adversaries might direct their time, attention, and energy toward something that is useless from an attack perspective. They might even plant time bombs in trap functionality that they believe