CCIE™ and CCDE™
Written Exam
Evolving Technologies
Version 1.1 Study Guide
By: Nicholas J. Russo
CCIE #42518 (RS/SP)
CCDE #20160041
Revision 2.0 (1 July 2018)
About the Author
Nicholas (Nick) Russo holds active CCIE certifications in Routing and Switching and Service Provider, as
well as CCDE. Nick authored a comprehensive study guide for the CCIE Service Provider version 4
examination, and this document provides updates to the written test for all CCIE/CCDE tracks. Nick also
holds a Bachelor of Science in Computer Science, with a minor in International Relations, from the
Rochester Institute of Technology (RIT). Nick lives in Maryland, USA with his wife, Carla, and daughter,
Olivia. For updates to this document and Nick’s other professional publications, please follow the author
on his Twitter, LinkedIn, and personal website.
Technical Reviewers: Angelos Vassiliou, Leonid Danilov, and many from the RouterGods team.
This material is not sponsored or endorsed by Cisco Systems, Inc. Cisco, Cisco Systems, CCIE and the CCIE
Logo are trademarks of Cisco Systems, Inc. and its affiliates. All Cisco products, features, or technologies
mentioned in this document are trademarks of Cisco. This includes, but is not limited to, Cisco IOS®,
Cisco IOS-XE®, and Cisco IOS-XR®. Within the body of this document, not every instance of the
aforementioned trademarks is combined with the ® or ™ symbols shown above. The same is true for
non-Cisco products; this is done for the sake of readability and brevity.
THE INFORMATION HEREIN IS PROVIDED ON AN "AS IS" BASIS, WITHOUT ANY WARRANTIES OR
REPRESENTATIONS, EXPRESS, IMPLIED OR STATUTORY, INCLUDING WITHOUT LIMITATION, WARRANTIES
OF NONINFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Author’s Notes
This book is designed for any CCIE track (as well as the CCDE) that introduces the "Evolving
Technologies" section of the blueprint for the written qualification exam. It is not specific to any
examination track and provides an overview of the three key evolving technologies: Cloud, Network
Programmability, and Internet of Things (IoT). Italic text represents text cited from another source, not
created by the author. This is typically taken directly from a Cisco document, which is appropriate given that this is a
summary of Cisco’s vision on the topics therein. This book is not an official publication, does not have an
ISBN assigned, and is not protected by any copyright. It is not for sale and is intended for free,
unrestricted distribution. The opinions expressed in this study guide belong to the author and are not
necessarily those of Cisco.
The author personally pays for the software used in this book's demonstrations. If
you like the book and want to help support its continued success, please consider donating here.
Contents
1. Cloud 5
1.1 Compare and contrast cloud deployment models 5
1.1.1 Infrastructure, platform, and software as a service (XaaS) 9
1.1.2 Performance, scalability, and high availability 12
1.1.3 Security implications, compliance, and policy 15
1.1.4 Workload migration 16
1.2 Describe cloud infrastructure and Operations 17
1.2.1 Compute virtualization, hyper-convergence, and disaggregation 21
1.2.1.1 Virtual Machines 21
1.2.1.2 Containers (with Docker demonstration) 22
1.2.2 Connectivity (in virtualized networks) 33
1.2.2.1 Virtual switches 34
1.2.2.2 Software-defined Wide Area Network (SD-WAN Viptela demonstration) 35
1.2.2.3 Software-defined Access (SD-Access) 39
1.2.2.4 Software-defined Data Center (SD-DC) 40
1.2.3 Virtualization Functions 43
1.2.3.1 Network Functions Virtualization infrastructure (NFVi) 43
1.2.3.2 Virtual Network Functions on layers 1 through 4 (Cisco NFVIS demonstration) 44
1.2.4 Automation and Orchestration tools 51
1.2.4.1 Cloud Center 52
1.2.4.2 Digital Network Architecture Center (DNA-C with demonstration) 52
1.2.4.3 Kubernetes orchestration (Demonstration with minikube) 59
1.3 References and Resources 66
2. Network Programmability 67
2.1 Describe architectural and operational considerations 67
2.1.1 Data models and structures 67
2.1.1.1 YANG 67
2.1.1.2 YAML 71
2.1.1.3 JSON 72
2.1.1.4 XML 73
2.1.2 Device programmability 74
2.1.2.1 Google Remote Procedure Call (gRPC demonstration on IOS-XR) 74
2.1.2.2 NETCONF (IOS-XE demonstration) 81
2.1.2.3 RESTCONF (IOS-XE demonstration) 85
2.1.2.4 REST API (IOS-XE demonstration) 86
2.1.3 Controller-based network design 93
2.1.4 Configuration management tools 99
2.1.4.1 Agent-based 99
2.1.4.2 Agent-less (Ansible demonstration) 100
2.1.5 Version control systems 105
2.1.5.1 Git with Github 105
2.1.5.2 Git with Amazon Web Services (AWS) CodeCommit and CodeBuild 108
2.1.5.3 Subversion (SVN) and comparison to Git 116
2.2 References and Resources 123
3. Internet of Things (IoT) 124
3.1 Describe architectural framework and deployment considerations for IoT 124
3.1.1 IoT Technology Stack 127
3.1.1.1 IoT Network Hierarchy 127
3.1.1.2 Data Acquisition and Flow 128
3.1.2 IoT standards and protocols 130
3.1.3 IoT security 134
3.1.4 IoT edge and fog computing 136
3.1.4.1 Data aggregation 137
3.1.4.2 Edge intelligence 139
3.2 References and Resources 141
4. Cloud topics from v1.0 blueprint 142
4.1.1 Troubleshooting and management 142
4.1.2 OpenStack components and demonstration 142
4.1.3 Cloud comparison chart 156
5. Network programmability topics from v1.0 blueprint 156
5.1 Describe functional elements of network programmability and how they interact 156
5.1.1 Controllers 156
5.2 Describe aspects of virtualization and automation in network environments 158
5.2.1 DevOps methodologies, tools and workflows (Jenkins demonstration) 158
6. Internet of Things (IoT) topics from the v1.0 blueprint 170
6.1 Describe architectural framework and deployment considerations 170
6.1.1 Performance, reliability and scalability 170
7. Glossary of Terms 171
1. Cloud
Cisco has defined cloud as follows:
IT resources and services that are abstracted from the underlying infrastructure and provided "on-
demand" and "at scale" in a multitenant environment.
Cisco identifies three key components from this definition that differentiate cloud deployments from
ordinary data center (DC) outsourcing strategies:
a. "On-demand" means that resources can be provisioned immediately when needed, released
when no longer required, and billed only when used.
b. "At-scale" means the service provides the illusion of infinite resource availability in order to meet
whatever demands are made of it.
c. "Multitenant environment" means that the resources are provided to many consumers from a
single implementation, saving the provider significant costs.
These distinctions are important for a few reasons. Some organizations joke that migrating to cloud is
simple; all they have to do is update their on-premises DC diagram with the words "Private Cloud" and
upper management will be satisfied. While it is true that the term "cloud" is often abused, it is
important to differentiate it from a traditional private DC. This is discussed next.
1.1 Compare and contrast cloud deployment models
Cloud architectures generally come in four variants:
a. Public: Public clouds are generally the type of cloud most people think about when the word
"cloud" is spoken. They rely on a third party organization (off-premise) to provide infrastructure
where a customer pays a subscription fee for a given amount of compute/storage, time, data
transferred, or any other metric that meaningfully represents the customer’s "use" of the cloud
provider’s shared infrastructure. Naturally, the supported organizations do not need to maintain
the cloud’s physical equipment. This is viewed by many businesses as a way to reduce capital
expenses (CAPEX) since purchasing new DC equipment is unnecessary. It can also reduce
operating expenses (OPEX) since the cost of maintaining an on-premise DC, along with trained
staff, could be more expensive than a public cloud solution. A basic public cloud design is shown
on the following page; the enterprise/campus edge uses some kind of transport to reach the
Cloud Service Provider (CSP) network. The transport could be the public Internet, an Internet
Exchange Point (IXP), a private Wide Area Network (WAN), or something else.
[Figure: basic public cloud design, with the campus edge connecting to the CSP edge over the Internet, a private WAN, or an IXP]
b. Private: As the earlier joke suggests, this model is like an on-premises DC, except it must supply the
three key ingredients identified by Cisco to be considered a "private cloud". Specifically, this
implies automation/orchestration, workload mobility, and compartmentalization must all be
supported in an on-premises DC to qualify. The organization is responsible for maintaining the
cloud’s physical equipment, which is extended to include the automation and provisioning
systems. This can increase OPEX as it requires trained staff. Like the on-premises DC, private
clouds provide application services to a given organization and multi-tenancy is generally limited
to business units or projects/programs within that organization (as opposed to external
customers). The diagram on the following page illustrates a high-level example of a private
cloud.
[Figure: high-level private cloud design, an on-premises DC attached to the campus core]
c. Virtual Private: A virtual private cloud is a combination of public and private clouds. An
organization may decide to use this to offload some (but not all) of its DC resources into the
public cloud, while retaining some things in-house. This can be seen as a phased migration to
public cloud, or by some skeptics, as a non-committal trial. This allows a business to objectively
assess whether the cloud is the "right business decision". This option is a bit complex as it may
require moving workloads between public/private clouds on a regular basis. At the very
minimum, there is the initial private-to-public migration; this could be time consuming,
challenging, and expensive. This design is sometimes called a "hybrid cloud" and could, in fact,
represent a business’ IT end-state. The diagram on the following page illustrates a high-level
example of a virtual-private (hybrid) cloud.
[Figure: high-level virtual private (hybrid) cloud design, combining an on-premises private cloud with public cloud access through the campus edge over the Internet, a private WAN, or an IXP]
d. Inter-cloud: Like the Internet (an interconnection of various autonomous systems that provides
reachability between all attached networks), Cisco suggests that, in the future, the contiguity of
cloud computing may extend between many third-party organizations. This is effectively how
the Internet works; a customer signs a contract with a given service provider (SP) yet has access
to resources from several thousand other service providers on the Internet. The same concept
could be applied to cloud and this is an active area of research for Cisco.
Below is a based-on-a-true-story discussion that highlights some of the decisions and constraints
relating to cloud deployments.
a. An organization decides to retain their existing on-premises DC for legal/compliance reasons. By
adding automation/orchestration and multi-tenancy components, they are able to quickly
increase and decrease virtual capacity. Multiple business units or supported organizations are
free to adjust their security policy requirements within the shared DC in a manner that is secure
and invisible to other tenants; this is the result of compartmentalization within the cloud
architecture. This deployment would qualify as a "private cloud".
b. Years later, the same organization decides to keep their most important data on-premises to
meet seemingly-inflexible Government regulatory requirements, yet feels that migrating a
portion of their private cloud to the public cloud is a solution to reduce OPEX. This helps
increase the scalability of the systems that the Government does not regulate, such as
virtualized network components or identity services, since the on-premises DC is bound by CAPEX
reductions. The private cloud footprint can now be reduced as it is used only for a subset of
tightly controlled systems, while the more generic platforms can be hosted from a cloud
provider at lower cost. Note that actually exchanging/migrating workloads between the two
clouds at will is not appropriate for this organization as they are simply trying to outsource
capacity to reduce cost. As discussed earlier, this deployment could be considered a "virtual
private cloud" by Cisco, but is also commonly referred to as a "hybrid cloud".
c. Years later still, this organization considers a full migration to the public cloud. Perhaps this is
made possible by the relaxation of the existing Government regulations or by the new security
enhancements offered by cloud providers. In either case, the organization can migrate its
customized systems to the public cloud and consider a complete decommission of their existing
private cloud. Such decommissioning could be done gracefully, perhaps by first shutting down
the entire private cloud and leaving it in "cold standby" before removing the physical racks.
Rather than using the public cloud to augment the private cloud (like a virtual private cloud), the
organization could migrate to a fully public cloud solution.
1.1.1 Infrastructure, platform, and software as a service (XaaS)
Cisco defines four critical service layers of cloud computing:
a. Software as a Service (SaaS) is where application services are delivered over the network on a
subscription and on-demand basis. A simple example would be creating a document without
installing the appropriate text editor on a user's personal computer. Instead, the application is
hosted "as a service" that a user can access anywhere, anytime, from any machine. SaaS is an
interface between users and a hosted application, often times a hosted web application.
Examples of SaaS include Cisco WebEx, Microsoft Office 365, github.com, blogger.com, and even
Amazon Web Services (AWS) Lambda functions. This last example is particularly interesting
since, according to Amazon, its "more granular model provides us with a much richer set
of opportunities to align tenant activity with resource consumption". Being "serverless", Lambda
functions execute a specific task based on what the customer needs, and only the resources
consumed during that task's execution (compute, storage, and network) are billed (a short
Lambda handler sketch follows this list).
b. Platform as a Service (PaaS) consists of run-time environments and software development
frameworks and components delivered over the network on a pay-as-you-go basis. PaaS
offerings are typically presented as APIs to consumers. Similar to SaaS, PaaS is focused on
providing a complete development environment for computer programmers to test new
applications, typically in the development (dev) phase. Although less commonly used by
organizations using mostly commercial-off-the-shelf (COTS) applications, it is a valuable offering
for organizations developing and maintaining specific, in-house applications. PaaS is an interface
between a hosted application and a development/scripting environment that supports it. Cisco
provides WebEx Connect as a PaaS offering. Other examples of PaaS include specific-purpose
AWS services like Route 53 for Domain Name System (DNS) support, CloudWatch for collecting
performance metrics, CloudFront for content delivery, and a wide variety of Relational
Database Service (RDS) offerings for storing data. The customer consumes these services but
does not have to maintain them (patching, updates, etc.) as part of their network operations.
c. Infrastructure as a Service (IaaS) is where compute, network, and storage are delivered over the
network on a pay-as-you-go basis. The approach that Cisco is taking is to enable service
providers to move into this area. This is likely the first thing that comes to mind when individuals
think of "cloud". It represents the classic "outsourced DC" mentality that has existed for years
and gives the customer flexibility to deploy any applications they wish. Compared to SaaS, IaaS
just provides the "hardware", roughly speaking, while SaaS provides both the underlying
hardware and software application running on it. IaaS may also provide a virtualization layer by
means of a hypervisor. A good example of an IaaS deployment could be a miniature public cloud
environment within an SP point of presence (POP) which provides additional services for each
customer: firewall, intrusion prevention, WAN acceleration, etc. IaaS is effectively an interface
between an operating system and the underlying hardware resources. More general-purpose
AWS services such as Elastic Compute Cloud (EC2) and Simple Storage Service (S3) qualify as IaaS
since AWS' management is limited to the underlying infrastructure, not the objects within
each service. The customer is responsible for basic maintenance (patching, hardening, etc.) of
these virtual instances and data products.
d. IT foundation is the basis of the above value chain layers. It provides basic building blocks to
architect and enable the above layers. While more abstract than the XaaS layers already
discussed, the IT foundation is generally a collection of core technologies that evolve over time.
For example, DC virtualization became very popular about 15 years ago and many organizations
spent most of the last decade virtualizing "as much as possible". DC fabrics have also changed in
recent years; the original designs represented a traditional core/distribution/access layer design
yet the newer designs represent leaf/spine architectures. These are "IT foundation" changes
that occur over time which help shape the XaaS offerings, which are always served using the
architecture defined at this layer. Cisco views DC evolution in five phases:
a. Consolidation: Driven mostly by a business need to reduce costs, this phase focused on
reducing edge computing and reducing the number of total DCs within an enterprise.
DCs started to take form with two major components:
i. Data Center Network (DCN): Provides the underlying reachability between
attached devices in the DC, such as compute, storage, and management tools.
ii. Storage Area Network (SAN): While this may be integrated or entirely separate
from the DCN, it is a core component in the DC. Storage devices are
interconnected over a SAN which typically extends to servers needing to access
the storage.
b. Abstraction: To further reduce costs and maximize return on investment (ROI), this
phase introduces pervasive virtualization. This provides virtual machine and workload
mobility and availability to DC operators.
c. Automation: To improve business agility, automation is introduced to rapidly and
consistently "do things" within a DC. These things include routine system management,
service provisioning, or business-specific tasks like processing credit card information.
d. Cloud: With the previous phases complete, the cloud model of IT services delivered as a
utility becomes possible for many enterprises. Such designs may include a mix of public
and private cloud solutions.
e. Intercloud: Briefly discussed earlier, this is Cisco's vision of cloud interconnection to
generally mirror the Internet concept. At this phase, internal and external clouds will
coexist, federate, and share resources dynamically.
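Returning to the serverless SaaS example in item (a), below is a minimal sketch of an AWS Lambda handler in Python. The handler signature (an event and a context passed in by Lambda) is the real contract; the event field and greeting logic are hypothetical placeholders. The point is that the customer supplies only this function, and billing is tied to the resources consumed while it runs.

```python
# Minimal AWS Lambda handler sketch. Lambda invokes lambda_handler(event, context)
# on demand; the customer manages no servers and is billed only for the resources
# consumed while this function runs. The "name" field is a hypothetical input.
import json


def lambda_handler(event, context):
    name = event.get("name", "world")

    # Returning an API Gateway-style response is a common (optional) pattern.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```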
Although not defined in formal Cisco documentation, there are many more flavors of XaaS. Below are
some additional examples of storage-related services commonly offered by large cloud providers:
a. Database-as-a-Service: Some applications require databases, especially relational databases like
the SQL family. This service would provide the database itself and the ability for the database to
connect to the application so it can be utilized. AWS RDS services qualify as offerings in this
category.
b. Object-Storage-as-a-Service: Sometimes cloud users only need access to files independent from
a specific application. Object storage is effectively a remote file share for this purpose, which in
many cases can also be utilized by an application internally. This service provides the object
storage service as well as the interfaces necessary for users and applications to access it. AWS S3
is an example of this service, which in some cases is a subset of IaaS/PaaS (a short boto3 sketch
follows this list).
c. Block-Storage-as-a-Service: These services are commonly tied to applications that require
access to the disks themselves. Applications can format the disks and add whatever file system
is necessary, or perhaps use the disk for some other purpose. This service provides the block
storage assets (disks, logical unit numbers (LUNs), etc.) and the interfaces to connect the
storage assets to the applications themselves. AWS Elastic Block Storage (EBS) is an example of
this service.
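As a rough illustration of Object-Storage-as-a-Service from item (b), the sketch below uses the AWS SDK for Python (boto3) to store and retrieve an object in S3. The bucket name and object key are hypothetical, and valid AWS credentials plus an existing bucket are assumed.

```python
# Object-Storage-as-a-Service sketch using boto3. The bucket and key are
# hypothetical; AWS credentials and an existing bucket are assumed.
import boto3

s3 = boto3.client("s3")

# Store an object (a "file" addressed by key, independent of any one application).
s3.put_object(
    Bucket="example-backups",
    Key="configs/router1.txt",
    Body=b"hostname router1\n",
)

# Retrieve the same object later, from anywhere with access to the bucket.
obj = s3.get_object(Bucket="example-backups", Key="configs/router1.txt")
print(obj["Body"].read().decode())
```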
This book provides a more complete look into popular cloud service offerings in the OpenStack section.
Note OpenStack was removed from the new v1.1 blueprint but was retained at the end of this book.
1.1.2 Performance, scalability, and high availability
Assessing the performance and reliability of cloud networks presents an interesting set of trade-offs. For
years, network designers have considered creating "failure domains" in the network so as to isolate
faults. With routing protocols, this is conceptually easy to understand, but often times difficult to design
and implement, especially when considering business/technical constraints. Designing a DC comes with
its own set of trade-offs when identifying the "failure domains" (which are sometimes called "availability
zones" within a fabric), but that is outside the scope of this document. The real trade-offs with a cloud
environment revolve around the introduction of automation. Automation is discussed in detail in section
1.2.4 but the trade-offs are discussed here as they directly influence the performance and reliability of a
system. Note that this discussion is typically relevant for private and virtual private clouds, as a public
cloud provider will always be large enough to warrant several automation tools.
Automation usually reduces the total cost of ownership (TCO), which is desirable for any business. This is
the result of reducing the time (and labor wages) it takes for individuals to "do things": provision a new
service, create a backup, add VLANs to switches, test MPLS traffic-engineering tunnel computations, etc.
The trade-off is that all software (including the automation system being discussed) requires
maintenance, whether that is in the form of in-house development or a subscription fee from a third-
party. If in the form of in-house development, software engineers are paid to maintain and troubleshoot
the software which could potentially be more expensive than just doing things manually, depending on
how much maintenance and unit testing the software requires. Most individuals who have worked as
software developers (including the author) know that bugs or feature requests always seem to pop up,
and maintenance is continuous for any non-trivial piece of code. Businesses must also consider the cost
of the subscription for the automation software against the cost of not having it (in labor wages).
Typically this becomes a simple choice as the network grows; automation often shines here. Automation
is such a key component of cloud environments because the cost of dealing with software maintenance
is almost always less than the cost of a large IT staff.
Automation can also be used for root cause analysis (RCA) whereby the tool can examine all the
components of a system to test for faults. For example, suppose an eBGP session fails between two
organizations. The script might test for IP reachability between the eBGP routers first, followed by
verifying that no changes were made to the infrastructure access lists applied on the interface. It might also collect
performance characteristics of the inter-AS link to check for packet loss. Last, it might check for
fragmentation on the link by sending large pings with "don’t fragment" set. This information can feed
into the RCA which is reviewed by the network staff and presented to management after an outage.
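As a rough sketch of the RCA workflow described above, the Python example below automates two of the simpler checks from a Linux management host: basic IP reachability to the eBGP peer and a don't-fragment ping to reveal MTU/fragmentation problems. The peer address and payload size are hypothetical, and a production tool would also log in to the routers (via SSH, NETCONF, etc.) to verify the access lists and interface counters mentioned above.

```python
# Sketch of two automated checks that could feed an RCA report after an eBGP
# session failure. The peer IP and payload size are hypothetical; "-M do" sets
# the don't-fragment bit on Linux (iputils ping).
import subprocess

PEER_IP = "192.0.2.1"  # hypothetical eBGP peer address
DF_PAYLOAD = "1472"    # 1500-byte MTU minus 28 bytes of IP/ICMP overhead


def ping(extra_args):
    """Return True if a 3-packet ping with the given options succeeds."""
    cmd = ["ping", "-c", "3", *extra_args, PEER_IP]
    return subprocess.run(cmd, capture_output=True).returncode == 0


checks = {
    "basic_reachability": ping([]),
    "df_bit_1500_mtu": ping(["-M", "do", "-s", DF_PAYLOAD]),
}

# These results would be combined with ACL and interface checks (collected over
# SSH/NETCONF) and written into the RCA for the network staff to review.
for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```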
The main takeaway is that automation should be deployed where it makes sense (TCO reduction) and
where it can be maintained with a reasonable amount of effort. Failing to provide the maintenance
resources needed to sustain an automation infrastructure can lead to disastrous results. With
automation, the "blast radius", or potential scope of damage, can be very large. A real-life story from
the author: when updating SNMPv3 credentials, the wrong privacy algorithm was configured, causing
100% of devices to be unmanageable via SNMPv3 for a short time. Correcting the change was easily
done using automation, and the business impact was minimal, but it negatively affected every router,
switch, and firewall in the network.
Automation helps maximize the performance and reliability of a cloud environment. Another key aspect
of cloud design is accessibility, which assumes sufficient network bandwidth to reach the cloud
environment. A DC that was once located at a corporate site with 2,000 employees was accessible to
those employees over a company’s campus LAN architecture. Often times this included high-speed core
and DC edge layers whereby accessing DC resources was fast and highly available. With public cloud, the
Internet/private WAN becomes involved, so cloud access becomes an important consideration.
Achieving cloud scalability is often reliant on many components supporting the cloud architecture.
These components include the network fabric, the application design, the virtualization/segmentation
design, and others. The ability of cloud networks to provide seamless and simple interoperability
between applications can be difficult to assess. Applications that are written in-house will probably
interoperate better in the private cloud since the third-party provider may not have a simple mechanism
to integrate with these custom applications. This is very common in the military space as in-house
applications are highly customized and often lack standards-based APIs. Some cloud providers may not
have this problem, but this depends entirely on their network/application hosting software (OpenStack
is one example discussed later in this document). If the application is coded "correctly", APIs would be
exposed so that additional provider-hosted applications can integrate with the in-house application. Too
often, custom applications are written in a silo where no such APIs are presented.
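To make the "exposed API" idea concrete, here is a minimal, hypothetical sketch of an in-house application publishing a small REST API with Flask so that other (possibly cloud-hosted) applications can integrate with it. The endpoint path and inventory data are invented for illustration.

```python
# Hypothetical in-house application exposing a small REST API with Flask so
# other applications can integrate with it (requires "pip install flask").
from flask import Flask, jsonify

app = Flask(__name__)

# Invented inventory data owned by the in-house application.
DEVICES = [
    {"hostname": "router1", "site": "HQ"},
    {"hostname": "router2", "site": "Branch-A"},
]


@app.route("/api/v1/devices", methods=["GET"])
def get_devices():
    # Authorized external applications consume this data over HTTP instead of
    # screen-scraping or reading private files.
    return jsonify({"devices": DEVICES})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```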
Below is a comparison of the access methods, reliability, and other characteristics of the different
cloud solutions.

Access (private WAN, IXP, Internet VPN, etc.)
  Public Cloud: Often times relies on Internet VPN, but could also use an Internet Exchange (IX) or private WAN.
  Private Cloud: Corporate LAN or WAN, which is often private. Could be Internet-based if SD-WAN deployments (e.g. Cisco IWAN) are considered.
  Virtual Private Cloud: Combination of corporate WAN for the private cloud components and whatever the public cloud access method is.
  Inter-Cloud: Same as public cloud, except relies on the Internet as transport between clouds/cloud deployments.

Reliability (including accessibility)
  Public Cloud: Heavily dependent on highly-available and high-bandwidth links to the cloud provider.
  Private Cloud: Often times high given the common usage of private WANs (backed by carrier SLAs).
  Virtual Private Cloud: Typically higher reliability to access the private WAN components, but depends entirely on the public cloud access method.
  Inter-Cloud: Assuming applications are distributed, reliability can be quite high if at least one "cloud" is accessible (anycast).

Fault tolerance / intra-cloud availability
  Public Cloud: Typically high as the cloud provider is expected to have a highly redundant architecture (varies based on $$).
  Private Cloud: Often constrained by corporate CAPEX; tends to be a bit lower than a managed cloud service given the smaller DCs.
  Virtual Private Cloud: Unlike public or private, the networking link between clouds is an important consideration for fault tolerance.
  Inter-Cloud: Assuming applications are distributed, fault tolerance can be quite high if at least one "cloud" is accessible (anycast).

Performance (speed, etc.)
  Public Cloud: Typically high as the cloud provider is expected to have a very dense compute/storage architecture.
  Private Cloud: Often constrained by corporate CAPEX; tends to be a bit lower than a managed cloud service given the smaller DCs.
  Virtual Private Cloud: Unlike public or private, the networking link between clouds is an important consideration, especially when applications are distributed across the two clouds.
  Inter-Cloud: Likewise, the networking link between clouds is an important consideration, especially when applications are distributed across the clouds.

Scalability
  Public Cloud: Appears to be "infinite", which allows the customer to provision new services quickly.
  Private Cloud: High CAPEX and OPEX to expand it, which limits scale within a business.
  Virtual Private Cloud: Scales well given public cloud resources.
  Inter-Cloud: Highest; massively distributed architecture.
1.1.3 Security implications, compliance, and policy
From a purely network-focused perspective, many would argue that public cloud security is superior to
private cloud security. This is the result of hiring an organization whose entire business revolves around
providing a secure, high-performing, and highly-available network. A business where "the network is not
the business" may be less inclined or less interested in increasing OPEX within the IT department, the
dreaded cost center. The counter-argument is that public cloud physical security is always questionable,
even if the digital security is strong. Should a natural disaster strike a public cloud facility and scatter disk
drives across a large geographic region (a tornado comes to mind), what is the cloud
provider's plan to protect customer data? What if the data is being stored in a region of the world
known to have unfriendly relations towards the home country of the supported business? These are
important questions to ask because when data is in the public cloud, the customer never really knows
exactly "where" the data is physically stored. This uncertainty can be offset by using "availability zones"
where some cloud providers will ensure the data is confined to a given geographic region. In many
cases, this sufficiently addresses the concern for most customers, but not always. As a customer, it is
also hard to enforce and prove this. This sometimes comes with an additional cost, too. Note that
disaster recovery (DR) is also a component of business continuity (BC) but like most things, it has
security considerations as well.
Privacy in the cloud is achieved mostly by introducing multi-tenancy separation. Compartmentalization
at the host, network, and application layers ensures that the entire cloud architecture keeps data private;
that is to say, customers can never access data from other customers. Sometimes this multi-tenancy can
be done as crudely as separating different customers onto different hosts, which use different VLANs
and are protected behind different virtual firewall contexts. Sometimes the security is integrated with
an application shared by many customers using some kind of public key infrastructure (PKI). Often times
maintaining this security and privacy is a combination of many techniques. Like all things, the security
posture is a continuum which could be relaxed between tenants if, for example, the two of them were
partners and wanted to share information within the same public cloud provider (like a cloud extranet).
The following compares the security and privacy characteristics of the different cloud
deployment options.

Digital security
  Public Cloud: Typically has the best trained staff, focused on the network and not much else (the network is the business).
  Private Cloud: Focused IT staff, but likely not IT-focused upper management (the network is likely not the business).
  Virtual Private Cloud: Coordination between clouds could provide attack surfaces, but this isn't widespread.
  Inter-Cloud: Coordination between clouds could provide attack surfaces (like what BGPsec is designed to solve).

Physical security
  Public Cloud: One cannot pinpoint their data within the cloud provider's network.
  Private Cloud: Generally high as a business knows where the data is stored, breaches notwithstanding.
  Virtual Private Cloud: Combination of public and private; depends on application component distribution.
  Inter-Cloud: One cannot pinpoint their data anywhere in the world.

Privacy
  Public Cloud: Transport from premises to cloud should be secured (Internet VPN, secure private WAN, etc.).
  Private Cloud: Generally secure assuming the corporate WAN is secure.
  Virtual Private Cloud: Need to ensure any replicated traffic between public/private clouds is protected; generally this is true as the link to the public cloud is protected.
  Inter-Cloud: Need to ensure any replicated traffic between distributed public clouds is protected; not the responsibility of end-customers, but cloud providers should provide it.
1.1.4 Workload migration
Workload mobility is a generic goal and has been around since the first virtualized DCs were created.
This gives IT administrators an increased ability to share resources among different workloads within
the virtual DC (which could consist of multiple DCs connected across a Data Center Interconnect, or DCI).
It also allows workloads to be balanced across a collection of resources. For example, if 4 hosts exist in a
cluster, one of them might be performing more than 50% of the computationally-expensive work while
the others are underutilized. The ability to move these workloads is an important capability.
It is important to understand that workload mobility is not necessarily the same thing as VM mobility.
For example, a workload’s accessibility can be abstracted using anycast while the application exists in
multiple availability zones (AZ) spread throughout the cloud provider’s network. Using Domain Name
System (DNS), different application instances can be utilized based on geographic location, time of day,
etc. The VMs have not actually moved but the resource performing the workload may vary.
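The toy Python sketch below illustrates this idea of workload mobility without VM mobility: the same application exists in several availability zones, and the "resolver" simply hands each client the best instance. The instance names, latencies, and health flags are hypothetical stand-ins for the measurements a real GSLB/DNS service (such as latency-based routing policies) would use.

```python
# Toy illustration of workload mobility without VM mobility: the application
# runs in several availability zones and a DNS/GSLB layer answers each query
# with the "best" instance. All values below are hypothetical placeholders.
INSTANCES = [
    {"fqdn": "app.us-east.example.com", "latency_ms": 20, "healthy": True},
    {"fqdn": "app.eu-west.example.com", "latency_ms": 95, "healthy": True},
    {"fqdn": "app.ap-south.example.com", "latency_ms": 180, "healthy": False},
]


def resolve():
    """Return the lowest-latency healthy instance; a real GSLB would also
    factor in the querying client's location, time of day, and load."""
    candidates = [i for i in INSTANCES if i["healthy"]]
    return min(candidates, key=lambda i: i["latency_ms"])["fqdn"]


# The VMs never move; only the answer handed to the client changes.
print(resolve())
```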
Although this concept has been around since the initial virtualization deployments, it is even more
relevant in cloud, since the massively scalable and potentially distributed nature of that environment is
abstracted into a single "cloud" entity. Using the cluster example from above, those 4 hosts might not
even be in the same DC, or even within the same cloud provider (with hybrid or Inter-cloud
deployments). The concept of workload mobility needs to be extended to large scale; note that this
doesn’t necessarily imply layer-2 extensions across the globe. It simply implies that the workload needs
to be moved or distributed differently, which can be solved with geographically-based anycast solutions,
for example.
As discussed in the automation/orchestration section above, orchestrating workloads is a major goal of
cloud computing. The individual tasks that are executed in sequence (and conditionally) by the
orchestration engine could be distributed throughout the cloud. The task itself (and the code for it) is
likely centralized in a code repository, which helps promote the "infrastructure as code" concept. The
task/script code can be modified, ultimately changing the infrastructure without logging into individual
devices. This has configuration management (CM) benefits for the managed device, since the device's
configuration does not need to be under CM at all anymore.
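As a small, hypothetical illustration of the "infrastructure as code" idea, the sketch below keeps the desired VLAN list in a version-controlled YAML file and computes what must change on a device. The file name, VLAN numbers, and current device state are assumptions; real tools such as Ansible implement this pattern and also push the resulting changes.

```python
# Infrastructure-as-code sketch: the desired VLAN list lives in a version-
# controlled file (hypothetical "vlans.yml", e.g. containing "vlans: [10, 20, 40]")
# and this script computes the delta against the device's current state.
# Requires "pip install pyyaml"; a real tool (Ansible, etc.) would also push
# the resulting changes over SSH or NETCONF.
import yaml

# Pretend this was gathered from the device ("show vlan brief", NETCONF, etc.).
current_vlans = {10, 20, 30}

with open("vlans.yml") as f:
    desired_vlans = set(yaml.safe_load(f)["vlans"])

print("VLANs to add:   ", sorted(desired_vlans - current_vlans))
print("VLANs to remove:", sorted(current_vlans - desired_vlans))
```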
1.2 Describe cloud infrastructure and Operations
Cloud implementation can be broken into two main categories: how the cloud provider works, and how
customers connect to the cloud. The second topic is more straightforward to answer and is
discussed first. There are three main options for connecting to a cloud provider, but this list is by no
means exhaustive:
a. Private WAN (like MPLS L3VPN): Using the existing private WAN, the cloud provider is
connected as an extranet. To use MPLS L3VPN as an example, the cloud-facing PE exports a
central service route-target (RT) and imports the corporate VPN RTs; a short template-rendering
sketch of this appears after these connectivity options. This approach could give direct
cloud access to all sites in a highly scalable, high-performing fashion. Traffic performance
would (should) be protected under the ISP’s SLA to cover both site-to-site customer traffic and
site-to-cloud/cloud-to-site customer traffic. The ISP may even offer this cloud service natively as
part of the service contract. Certain services could be collocated in an SP POP as part of that SP's
cloud offering. The private WAN approach is likely to be expensive and as companies try to drive
OPEX down, a private WAN may not even exist. Private WAN is also good for virtual private
(hybrid) cloud assuming the ISP’s SLA is honored and is routinely measuring better performance
than alternative connectivity options. Virtual private cloud makes sense over private WAN
because the SLA is assumed to be better; therefore, the intra-DC traffic (despite being inter-site)
will not suffer performance degradation. Services could be spread between the private and
public clouds assuming the private WAN bandwidth is very high and latency is very low, both of
which would be required in a cloud environment. It is not recommended to do this as the
amount of intra-workflow bandwidth (database server on-premises and application/web server
in the cloud, for example) is expected to be very high. The diagram on the following page
depicts private WAN connectivity assuming MPLS L3VPN. In this design, branches could directly
access cloud resources without transiting the main site.
b. Internet Exchange Point (IXP): A customer’s network is connected via the IXP LAN (might be a
LAN/VLAN segment or a layer-2 overlay) into the cloud provider’s network. The IXP network is
generally access-like and connects different organizations together so that they can peer with
Border Gateway Protocol (BGP) directly, but typically does not provide transit services between
sites like a private WAN. Some describe an IXP as a "bandwidth bazaar" or "bandwidth
marketplace" where such exchanges can happen in a local area. A strict SLA may not be
guaranteed but performance would be expected to be better than the Internet VPN. This is
likewise an acceptable choice for virtual private (hybrid) cloud but lacks the tight SLA typically
offered in private WAN deployments. A company could, for example, use internet VPNs for inter-
site traffic and an IXP for public cloud access. A private WAN for inter-site access is also
acceptable. The diagram on the following page shows a private WAN for branch connectivity and
an IXP used for cloud connectivity from the main campus site.
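Tying this back to the network programmability theme, the sketch below uses a Jinja2 template to render the kind of route-target import/export configuration described for the private WAN extranet in option (a). The VRF name, route distinguisher, and RT values are hypothetical examples, not a recommended design.

```python
# Rendering a hypothetical extranet-style VRF configuration with Jinja2
# (requires "pip install jinja2"). The cloud-facing PE would export the
# central-services RT that each customer VRF imports below.
from jinja2 import Template

TEMPLATE = """\
vrf definition {{ vrf }}
 rd {{ rd }}
 address-family ipv4
  route-target export {{ cust_rt }}
  route-target import {{ cust_rt }}
  route-target import {{ cloud_rt }}
"""

config = Template(TEMPLATE).render(
    vrf="CUSTOMER_A",      # hypothetical customer VRF
    rd="65000:100",
    cust_rt="65000:100",   # customer's own RT
    cloud_rt="65000:999",  # central cloud services RT
)
print(config)
```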
Random documents with unrelated
content Scribd suggests to you:
55
56
Because of the plume impingement problem on Voyager 1,
Voyager 2’s first trajectory correction maneuver was adjusted
to allow for the possibility of a 20 percent loss in thrust. The Voyager
2 maneuver was successful, but controllers felt that additional action
was required to conserve fuel. One way to save was by reducing
requirements on control of the spacecraft orientation. Less control
fuel would be needed if the already miniscule pressure exerted on the
spacecraft by the solar wind could be reduced. Flight engineers at JPL
calculated that the pressure would be reduced if the spacecraft were
tipped upside down; however, to accomplish this, the spacecraft
would have to be steered by a new set of guide stars. By
reprogramming the attitude control system it was found possible to
substitute the northern star, Deneb, in the constellation of Cygnus,
for the original reference star, Canopus, in the southern constellation
of Carina. With this change, as well as readjustment of Voyager 2’s
trajectory near Jupiter, inflight consumption of hydrazine was reduced
significantly.
In late August 1978 both Voyagers were reprogrammed to ensure
better science results at Jupiter encounter; for example, the
reprogramming would prevent imaging (TV) photographs from
blurring when the tape recorder was operating. By early November,
flight crews had begun training exercises to rehearse for the Voyager
1 flyby of Jupiter on March 5, 1979. A near encounter test was
performed on December 12-14, 1978: a complete runthrough of
Voyager 1’s 39-hour near encounter period, which would take place
March 3-5, 1979. Participants included the flight team, the Deep
Space Network tracking stations, the scientists, and the spacecraft
itself. Results: Voyager and the Voyager team were all ready for the
encounter.
Meanwhile, the spacecraft were busy returning scientific data to
Earth. Technically, the Voyagers were in the cruise phase of the
mission—a period that, for Voyager 2, would last until April 24, 1979,
and for Voyager 1, until January 4, 1979, when each spacecraft
would enter its respective observatory phase.
Cruise Phase Science
In the first few days after launch, the spacecrafts’ instruments were
turned on and calibrated; various tests for each instrument would
continue to be performed throughout the cruise phase. This period
presented a great opportunity for the Voyagers to study the
interplanetary magnetic fields, solar flares, and the solar wind. In
addition, ultraviolet and infrared radiation studies of the sky were
performed. In mid-September 1977 the television cameras on
Voyager 1 recorded a number of photographs of the Earth and Moon.
A photograph taken September 18 captured both crescent Moon and
crescent Earth. It was the first time the two celestial bodies had ever
been photographed together.
In November both Voyagers crossed the orbit of Mars, entering the
asteroid belt a month later. On December 15, at a distance of about
170 million kilometers from Earth, Voyager 1 finally speeded past its
slower twin. The journey through the asteroid belt was long but
uneventful: Voyager 1 emerged safely in September 1978, and
Voyager 2 in October. Unlike Pioneers 10 and 11, the Voyagers
carried no instruments to look at debris in the asteroid belt.
By April both spacecraft were already halfway to Jupiter and, about
two months later, Voyager 1, still approximately 265 million
kilometers from Jupiter, began returning photographs of the planet
that showed considerable detail, although less than could be obtained
with telescopes on Earth. Both the imaging (TV) and the planetary
radio astronomy instruments began observing Jupiter, and, by
October 2, 1978, officials announced that “the polarization
characteristics of Jupiter’s radio emissions have been defined. In the
high frequencies, there is consistent right-hand circular polarization,
while in the low frequencies, there is a consistent left-hand circular
polarization. This was an unexpected result.” In addition to scientific
studies of the interplanetary medium and a first look at Jupiter, the
plasma wave instrument, which studies waves of charged particles
over a range of frequencies that includes audio frequencies, was able
57
to record the sound of the spacecraft thrusters firing, as hydrazine
fuel is decomposed and ejected into space. The sound was described
as being “somewhat like a 5-gallon can being hit with a leather-
wrapped mallet.”
On December 10, 1978, 83 million kilometers from Jupiter, Voyager 1
took photographs that surpassed the best photographs ever taken
from ground-based telescopes, and scientists were anxiously awaiting
the start of continuous coverage of the rapidly changing cloud forms.
These pictures, together with data from several ground-based
observatories, were carefully scrutinized by a team of scientists at JPL
to select the final targets in the atmosphere of Jupiter to be studied
at high resolution during the flyby. The observatory phase of Voyager
1’s journey to Jupiter was about to begin.
As Voyager 1 approached Jupiter, the resolution of the images steadily
improved. In October 1978, at a distance of about 125 million kilometers,
the image was less clear than would be obtained with an Earth-based
telescope. [P-20790]
By December 10, the spacecraft had moved to a distance of 85 million
kilometers, and the resolution was about 2000 kilometers, comparable to
the best telescopic images. [P-20829C]
On January 9, 1979 (c), at a distance of 54 million kilometers, the image
surpassed all ground-based views and approached the resolution of the
Pioneer 10 and 11 photos. [P-20926C]
58
In this, taken January 24 at a distance of 40 million kilometers, the
resolution exceeded 1000 kilometers. [P-20945C]
The Observatory Phase
The observatory phase, originally scheduled to start on December 15,
1978, eighty days before encounter, was postponed until January 4,
1979, to provide the flight team a holiday-season break. For the next
two months, Voyager would carry out a long-term scientific study—a
“time history”—of Jupiter, its satellites, and its magnetosphere. On
January 6, Voyager 1 began photographing Jupiter every two hours—
each time taking a series of four photographs through different color
filters as part of a long-duration study of large-scale atmospheric
processes, so scientists could study the changing cloud patterns on
Jupiter. Even the first of these pictures showed that the atmosphere
was dynamic “with more convective structure than had previously
been thought.” Particularly striking were the changes in the planet,
especially near the Great Red Spot, that had taken place since the
Pioneer flybys in 1973 and 1974. Jupiter was wearing a new face for
Voyager.
In mid-January, photos of Jupiter were already being praised for
“showing exceptional details of the planet’s multicolored bands of
clouds.” Still 47 million kilometers from the giant, Voyager 1 had by
now taken more than 500 photographs of Jupiter. Movements of
cloud patterns were becoming more obvious; feathery structures
seemed painted across some of the bands that encircle the planet;
swirling features were huddled near the Great Red Spot. The
satellites were also beginning to look more like worlds, with a few
bright spots visible on Ganymede and dark red poles and a bright
equatorial region clearly seen on Io when it passed once each orbit
across the turbulent face of Jupiter.
From January 30 to February 3, Voyager sent back one photograph of
Jupiter every 96 seconds over a 100-hour period. Using a total of
three different color filters, the spacecraft thus produced one color
picture every 4¾ minutes, in order to make a “movie” covering ten
Jovian “days.” To receive these pictures, sent back over the high-rate
X-band transmitter, the Deep Space Network’s 64-meter antennas
provided round-the-clock coverage. Voyager 1 was ready for its far
encounter phase.
59
The Galilean satellites of Jupiter first began to show as tiny worlds, not
mere points of light, as the Voyager 1 observatory phase began. In this
view taken January 17, 1979, at a range of 47 million kilometers, the
differing sizes and surface reflectivities (albedos) of Ganymede (right
center) and Europa (top right) are clearly visible. The view of Jupiter is
unusual in that the Great Red Spot is not easily visible, but can just be
seen at the right edge of the planet. Most pictures selected for
publication include the photogenic Red Spot. [P-20938C]
Far Encounter Phase
By early February Jupiter loomed too large in the narrow-angle
camera to be photographed in one piece; 2 × 2 three-color (violet,
orange, green) sets of pictures were taken for the next two weeks;
by February 21, Jupiter had grown too large even for that, and 3 × 3
sets were scheduled to begin. When these sets of pictures are pasted
together to form a single picture, the result is called a mosaic.
By February 1, 1979, Voyager 1 was only 30 million kilometers from
Jupiter, and the resolution of the imaging system corresponded to about
600 kilometers at the cloud tops of the giant planet. At this time, a great
deal of unexpected complexity became apparent around the Red Spot,
and movie sequences clearly showed the cloud motions, including the
apparent six-day rotation of the Red Spot. [P-20993C]
60
One of the most spectacular planetary photographs ever taken was
obtained on February 13 as Voyager 1 continued its approach to Jupiter.
By this time, at a range of 20 million kilometers, Jupiter loomed too large
to fit within a single narrow-angle imaging frame. Passing in front of the
planet are the inner two Galilean satellites. Io, on the left, already shows
brightly colored patterns on its surface, while Europa, on the right, is a
bland ice-covered world. The scale of all of these objects is huge by
terrestrial standards; Io and Europa are each the size of our Moon, and
the Red Spot is larger than the Earth. [P-21082C]
While the imaging experiments were in the limelight, the other
scientific instruments had also begun to concentrate on the Jovian
system. The ultraviolet spectrometer had been scanning the region
eight times a day; the infrared spectrometer (IRIS) spent 1½ hours a
day analyzing infrared emissions from various longitudes in Jupiter’s
atmosphere; the planetary radio astronomy and plasma wave
61
instruments looked for radio bursts from Jupiter and for plasma
disturbances in the region; the photopolarimeter had begun
searching for the edge of Io’s sodium torus; and a watch was begun
for the bow shock—the outer boundary of the Jovian magnetosphere.
On February 10, Voyager 1 crossed the orbit of Sinope, Jupiter’s
outermost satellite. Yet the spacecraft even then had a long way to
go—still 23 million kilometers from Jupiter, but closing in on the
planet at nearly a million kilometers a day. A week later, targeted
photographs of Callisto began to provide coverage of the satellite all
around its orbit; similar photos of Ganymede began on February 25.
Meanwhile excitement was building. As early as February 8 and 9,
delight with the mission and anticipation of the results of the
encounter in March were already evident. Garry E. Hunt, from the
Laboratory for Planetary Atmospheres, University College,
London, a member of the Imaging Team, discussing the
appearance of Jupiter’s atmosphere as seen by Voyager during the
previous month, said “It seems to be far more photogenic now than
it did during the Pioneer encounters; I’m more than delighted by it—
it’s an incredible state of affairs. There are infinitely more details than
ever imagined.”
The Voyager Project was operated from the Jet Propulsion Laboratory
managed for NASA by the California Institute of Technology. Located in
the hills above Pasadena, California, JPL is the main center for U.S.
exploration of the solar system. [JB17249BC]
The Voyager TV cameras do not take color pictures directly as do
commercial cameras. Instead, a color image is reconstructed on the
ground from three separate monochromatic images, obtained through
color filters and transmitted separately to Earth. There are a number of
possible filter combinations, but the most nearly “true” color is obtained
with originals photographed in blue (480 nanometers), green (565
nanometers), and orange (590 nanometers) light. Before these can be
combined, the individual frames must be registered, correcting for any
change in spacecraft position or pointing between exposures. Often, only
part of a scene is contained in all three original pictures. Shown here is a
reconstruction of a plume on Jupiter, photographed on March 1, 1979.
62
The colors used to print the three separate frames can be seen clearly in
the nonoverlapping areas. For other pictures in this book, the
nonoverlapping partial frames are omitted. [P-21192]
The pictures from Voyager are “clearly spectacular,” said Lonne Lane,
Assistant Project Scientist for Jupiter. “We’re getting even better
results than we had anticipated. We have seen new phenomena in
both optical and radio emissions. We have definitely seen things that
are different—in at least in one case, unanticipated—and are begging
for answers we haven’t got.” There was already, still almost a month
from encounter, a strong feeling of accomplishment among the
scientists and engineers; they had done a difficult task and it has
been successful.
By the last week of February 1979, the attention of thousands of
individuals was focused on the activities at JPL. Scientists had arrived
from universities and laboratories around the country and from
abroad, many bringing graduate students or faculty colleagues to
assist them. Engineers and technicians from JPL contractors joined
NASA officials as the Pasadena motels filled up. Special badges were
issued and reserved parking areas set aside for the Voyager influx.
Twenty-four hours a day, lights burned in the flight control rooms, the
science offices, the computer areas, and the photo processing labs.
In order to protect those with critical tasks to perform from the
friendly distraction of the new arrivals, special computer-controlled
locks were placed on doors, and security officers began to patrol the
halls. By the end of the month the press had begun to arrive. Amid
accelerating excitement, the Voyager 1 encounter was about to
begin.
The smaller-scale clouds on Jupiter tend to be more irregular than the
large ovals and plumes. At the lower right, one of the three large white
ovals clearly shows internal structure, with the swirling cloud pattern
indicating counterclockwise, or anticyclonic, flow. A smaller anticyclonic
white feature near the center is surrounded by a dark, cloud-free band
63
where one can see to greater depths in the atmosphere. This photo was
taken March 1 from a distance of 4 million kilometers. [P-21183C]
CHAPTER 6
THE FIRST ENCOUNTER
The Giant Is Full of Surprises
The Voyager 1 encounter took place at 4:42 a.m. PST, March 5, 1979.
About six hours before, while the spacecraft continued to hurtle on
toward Jupiter, overflow crowds had poured out of Beckman
Auditorium on the campus of the California Institute of Technology.
There had been a symposium entitled Jupiter and the Mind of Man.
More than one face that evening had turned to look toward the
Pasadena night, for there, glittering against the fabric of the sky, was
the “star” of the show—a planet so huge that the Earth would be but
a blemish on its Great Red Spot. One might have tried to imagine
Voyager 1, large by spacecraft standards, rapidly closing in on Jupiter
—a pesky, investigative “mosquito” buzzing around the Jovian
system.
Meanwhile, at JPL, pictures taken by Voyager 1 were flashing back
every 48 seconds, pictures that were already revealing more and
more of Jupiter’s atmosphere and would soon disclose four new
worlds—the Galilean satellites. The encounter was almost at hand.
4:42 a.m.—Voyager 1 had made its closest approach to Jupiter 37
minutes earlier. Only now were the signals that had been sent out
from the spacecraft at 4:05 a.m. reaching the Earth.
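The 37-minute gap is simply the one-way light travel time from Jupiter to Earth. As a rough check, assuming the roughly 660-million-kilometer Earth range quoted below for February 27 (the exact distance on March 5 differed slightly):

    # One-way light time from Voyager 1, near Jupiter, to Earth
    range_km = 660e6                     # approximate Earth range near encounter
    c_km_per_s = 299_792.458             # speed of light
    print(range_km / c_km_per_s / 60)    # about 36.7 minutes, matching the 37-minute delay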
But the moment of closest encounter did not have the same impact a
landing would have had. Although the champagne would flow later
for those who had worked so long and hard on this successful
mission, there was none now. In the press room there were just four
coffeepots working overtime to help keep the press alert. In the
science areas, focus had shifted to the satellite encounters, which
would stretch over the next 24 hours; meanwhile, a few persons tried
to catch a little sleep at their desks before the first closeups of Io
came in. The instant of encounter came ... and went ... with no
screams, no New Year’s noisemakers. All the excitement of the
mission lay both behind ... and ahead.
Thursday, February 22.
The encounter activity was kicked off in Washington, D.C., by a press
conference at NASA Headquarters. After introductory statements by
NASA and JPL officials, some of the scientific results from the
observatory and far encounter phases were presented.
Both the plasma wave and the planetary radio astronomy instruments
had detected very-low-frequency radio emission that generally
increased in intensity whenever either the north or south magnetic
pole of Jupiter tipped toward the spacecraft. The plasma wave
experiment had also detected radiation that seemed to come from a
region either near or beyond the orbit of Io—perhaps even as far as
the outer magnetosphere. In addition, Frederick L. Scarf, Principal
Investigator for the plasma wave instrument, released a recording of
the ion sound waves created by 5- to 10-kilovolt protons
traveling upstream from Jupiter.
James Warwick, Principal Investigator of the radio astronomy
investigation, enthusiastically reported the detection of a striking new
low-frequency radiation from Jupiter, at wavelengths of tens of
kilometers. Such radio waves cannot penetrate the ionosphere of the
Earth, and thus had never been detected before. Because of their
huge scale, these bursts probably did not originate at Jupiter itself,
but in magnetospheric regions above the planet—perhaps in
association with the high-density plasma torus at the orbit of Io.
Warwick commented that “only from the magnificent
perspective of the Voyager as it approaches Jupiter have we been
able to get this complete picture.”
The Voyager 1 encounter with Jupiter took place during a little more than
48 hours, from the inbound to the outbound crossing of the orbit of
Callisto. This figure shows the spacecraft path as it would be seen from
above the north pole of Jupiter. Closest approach to Jupiter was 350 000
kilometers. The close flybys of Io, Ganymede, and Callisto all took place
as the spacecraft was outbound.
[Diagram labels: Voyager 1 trajectory; launch date 9/5/77; Jupiter arrival date 3/5/79; satellite closest approach; Sun occultation; Earth occultation; periapsis; Amalthea, Io, Europa, Ganymede, Callisto]
Another unexpected result was announced by Lyle Broadfoot,
Principal Investigator for the ultraviolet spectrometer investigation.
The scientists had expected to find very weak ultraviolet emissions on
the sunlit side of Jupiter, caused by sunlight being scattered from
hydrogen and helium in Jupiter’s upper atmosphere. “Instead we are
seeing a spectacular auroral display. There are two features of the
emission—the auroral emission which comes from the planet and a
second type of emission which appears to come from a radiating
torus or shell around the planet at the orbit of Io. The spectral
content of these two radiating sources is distinctly different. What we
find is that the auroral emission from Jupiter’s atmosphere is so
strong that it completely dominates the emission spectrum even on
the sunlit side of the atmosphere.”
“Not since Mariner 4 carried its TV camera to Mars fifteen years ago
have we been less prepared—have we been less certain of what we
are about to see over the next two weeks,” said Bradford Smith,
Imaging Team Leader. He mentioned the time-lapse “rotation movie,”
in which the colorful planet spun through ten full Jupiter days; tiny
images of the satellites passed across Jupiter’s face as though being
whipped along by the rotation of the giant. A week or so earlier,
when this film had been shown for the first time to the full Imaging
Team, it provided an occasion for good-humored rivalry between
planet people and satellite people, with jokes about the satellites
getting in the way of the important studies of Jupiter. For the next
few days, the imaging focus remained on Jupiter; it would then shift to Io,
Ganymede, and Callisto as each was passed in turn after closest
approach to the planet.
Tuesday, February 27.
(Range to Jupiter, 7.1 million kilometers). At a distance of 660 million
kilometers from Earth, within 90 Jovian radii (RJ) of Jupiter, Voyager 1
was prepared to begin the encounter with the planet’s
magnetosphere. On the previous day the spacecraft had crossed the
point, at 100 RJ, at which Pioneers 10 and 11 had found the bow
shock, the first indication of the magnetospheric boundary. The start
of Voyager’s plunge into the Jovian magnetosphere was overdue, and
scientists anxiously watched the data from the particles and fields
instruments, looking for the first indication of disordered magnetic
fields and altered particle densities that would mark the bow shock.
Apparently, higher solar wind pressure, associated with increased
solar activity since 1974, had compressed the magnetosphere, but no
one could predict how strong this compression might be.
ENCOUNTER DISTANCES FOR VOYAGER 1

Object      Range to Center at Closest    Best Image Resolution
            Approach (kilometers)         (km per line pair)
Jupiter     349 000                       8
Amalthea    417 000                       8
Io          21 000                        1
Europa      734 000                       33
Ganymede    115 000                       2
Callisto    126 000                       2
For the first time since its discovery, the unexpected ultraviolet
emission near the orbit of Io was given a probable identification.
Lyle Broadfoot and his UVS colleagues suggested that the most likely candidate
was sulfur atoms with two electrons removed (S III), at an inferred
temperature perhaps as high as 200 000 K. An additional
indication of sulfur came from Mike Krimigis, who reported that
the low-energy charged particles instrument had detected bursts of
sulfur ions streaming away from Jupiter that had apparently escaped
from the inner magnetosphere. No explanations were offered,
however, for the presence of large amounts of this element.
At JPL, a press room had been opened in Von Karman Auditorium to
accommodate the hundred or so reporters expected to arrive. To
keep all the interested people informed of Voyager progress, frequent
television reports were beamed over closed-circuit TV throughout
JPL. From an in-lab television studio called the Blue Room, JPL
scientist Al Hibbs, who had played a similar role during the Viking
Mission to Mars, provided hourly reports and interviewed members of
the Voyager teams. As the pressure for constant commentary and
instant analysis increased, Garry Hunt of the Imaging Team was also
called on to host activities in the Blue Room, where his British accent
added a further touch of class to the operation.
As the encounter progressed, the JPL television reports reached a
wider audience. In the Los Angeles area, KCET Public Television
began a nightly “Jupiter Watch” program, with Dr. Hibbs as host.
During the encounter days, service was extended to interested public
television stations throughout the nation. In this way, tens of
thousands of persons were able to experience the thrill of discovery,
seeing the closeup pictures of Jupiter and its satellites at the same
moment as the scientists at JPL saw them, and listening to the
excited and frequently awestruck commentary as the first tentative
interpretations were attempted. Unfortunately, the commercial
television networks did not make use of this opportunity, and the
greatest coverage available to most of the country was a 90-second
commentary on the nightly network news.
Wednesday, February 28.
(Range to Jupiter, 5.9 million kilometers). At 6:33 a.m., at a range of
86 RJ, Voyager 1 finally reached Jupiter’s bow shock. But by 12:28
p.m. the solar wind had pushed the magnetosphere back toward
Jupiter, and Voyager was once more outside, back in the solar wind.
Not until March 2, at a distance of less than 45 RJ, would the
spacecraft enter the magnetosphere for the final time.
At 11 a.m. the first daily briefing to the press was given. “After nearly
two months of atmospheric imaging and perhaps a week or two of
satellite viewing, [we’re] happily bewildered,” said Brad Smith. The
Jovian atmosphere is “where our greatest state of confusion seems to
exist right at the moment, although over the next several days we
may find that some of our smirking geology friends will find
themselves in a similar state. I think, for the most part, we have to
say that the existing atmospheric circulation models have all been
shot to hell by Voyager. Although these models can still explain some
of the coarse zonal flow patterns, they fail entirely in explaining the
detailed behavior that Voyager is now revealing.” It was thought,
from Pioneer results, that Jupiter’s atmosphere showed primarily
horizontal or zonal flow near the equatorial region, but that the zonal
flow pattern broke down at high latitudes. But Voyager found that
“zonal flow exists poleward as far as we can see.”
Smith also showed a time-lapse movie of Jupiter assembled from
images obtained during the month of January. Once each rotation,
approximately every ten hours, a color picture had been taken.
Viewed consecutively, these frames displayed the complex cloud
motions on a single hemisphere of Jupiter, as they would be seen
from a fixed point above the equator of the planet. The film revealed
that clouds move around the Great Red Spot in a period of about six
days, at speeds of perhaps 100 meters per second. The Great Red
Spot, as well as many of the smaller spots that dot the planet,
appeared to be rotating anticyclonically. Anticyclonic motion is
characteristic of high-pressure regions, unlike terrestrial storms,
which are cyclonic, low-pressure systems.
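The quoted speed and period are mutually consistent, as a quick calculation shows; the figure of roughly 25 000 by 12 000 kilometers for the size of the Great Red Spot used in the comment below is an outside estimate from that era, not a number taken from this text.

    # Distance a cloud parcel covers while circling the Great Red Spot
    speed_m_per_s = 100
    period_s = 6 * 24 * 3600                 # about six days
    print(speed_m_per_s * period_s / 1000)   # ~51 800 km, comparable to the perimeter
                                             # of an oval roughly 25 000 x 12 000 km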
Smith noted that “Jupiter is far more complex in its atmospheric
motions than we had ever imagined. We are seeing a much more
complicated flow of cyclonic and anticyclonic vorticity, circulation. We
see currents which flow along and seem to interact with an obstacle
and turn around and flow back.” There is a Jovian jet stream that is
“moving along at well over 100 meters per second. Several of these
  • 23.
    Random documents withunrelated content Scribd suggests to you:
  • 25.
  • 26.
    56 Because of theplume impingement problem on Voyager 1, Voyager 2’s first trajectory correction maneuver was adjusted to allow for the possibility of a 20 percent loss in thrust. The Voyager 2 maneuver was successful, but controllers felt that additional action was required to conserve fuel. One way to save was by reducing requirements on control of the spacecraft orientation. Less control fuel would be needed if the already miniscule pressure exerted on the spacecraft by the solar wind could be reduced. Flight engineers at JPL calculated that the pressure would be reduced if the spacecraft were tipped upside down; however, to accomplish this, the spacecraft would have to be steered by a new set of guide stars. By reprogramming the attitude control system it was found possible to substitute the northern star, Deneb, in the constellation of Cygnus, for the original reference star, Canopus, in the southern constellation of Carina. With this change, as well as readjustment of Voyager 2’s trajectory near Jupiter, inflight consumption of hydrazine was reduced significantly. In late August 1978 both Voyagers were reprogrammed to ensure better science results at Jupiter encounter; for example, the reprogramming would prevent imaging (TV) photographs from blurring when the tape recorder was operating. By early November, flight crews had begun training exercises to rehearse for the Voyager 1 flyby of Jupiter on March 5, 1979. A near encounter test was performed on December 12-14, 1978: a complete runthrough of Voyager 1’s 39-hour near encounter period, which would take place March 3-5, 1979. Participants included the flight team, the Deep Space Network tracking stations, the scientists, and the spacecraft itself. Results: Voyager and the Voyager team were all ready for the encounter. Meanwhile, the spacecraft were busy returning scientific data to Earth. Technically, the Voyagers were in the cruise phase of the mission—a period that, for Voyager 2, would last until April 24, 1979, and for Voyager 1, until January 4, 1979, when each spacecraft would enter its respective observatory phase.
  • 27.
Cruise Phase Science

In the first few days after launch, the spacecraft's instruments were turned on and calibrated; various tests for each instrument would continue to be performed throughout the cruise phase. This period presented a great opportunity for the Voyagers to study the interplanetary magnetic fields, solar flares, and the solar wind. In addition, ultraviolet and infrared radiation studies of the sky were performed. In mid-September 1977 the television cameras on Voyager 1 recorded a number of photographs of the Earth and Moon. A photograph taken September 18 captured both crescent Moon and crescent Earth. It was the first time the two celestial bodies had ever been photographed together.

In November both Voyagers crossed the orbit of Mars, entering the asteroid belt a month later. On December 15, at a distance of about 170 million kilometers from Earth, Voyager 1 finally sped past its slower twin. The journey through the asteroid belt was long but uneventful: Voyager 1 emerged safely in September 1978, and Voyager 2 in October. Unlike Pioneers 10 and 11, the Voyagers carried no instruments to look at debris in the asteroid belt. By April both spacecraft were already halfway to Jupiter and, about two months later, Voyager 1, still approximately 265 million kilometers from Jupiter, began returning photographs of the planet that showed considerable detail, although less than could be obtained with telescopes on Earth. Both the imaging (TV) and the planetary radio astronomy instruments began observing Jupiter, and, by October 2, 1978, officials announced that "the polarization characteristics of Jupiter's radio emissions have been defined. In the high frequencies, there is consistent right-hand circular polarization, while in the low frequencies, there is a consistent left-hand circular polarization. This was an unexpected result."

In addition to scientific studies of the interplanetary medium and a first look at Jupiter, the plasma wave instrument, which studies waves of charged particles over a range of frequencies that includes audio frequencies, was able to record the sound of the spacecraft thrusters firing as hydrazine fuel is decomposed and ejected into space. The sound was described as being "somewhat like a 5-gallon can being hit with a leather-wrapped mallet." On December 10, 1978, 83 million kilometers from Jupiter, Voyager 1 took photographs that surpassed the best ever taken from ground-based telescopes, and scientists were anxiously awaiting the start of continuous coverage of the rapidly changing cloud forms. These pictures, together with data from several ground-based observatories, were carefully scrutinized by a team of scientists at JPL to select the final targets in the atmosphere of Jupiter to be studied at high resolution during the flyby. The observatory phase of Voyager 1's journey to Jupiter was about to begin.
As Voyager 1 approached Jupiter, the resolution of the images steadily improved. In October 1978, at a distance of about 125 million kilometers, the image was less clear than would be obtained with an Earth-based telescope. [P-20790]
By December 10, the spacecraft had moved to a distance of 85 million kilometers, and the resolution was about 2000 kilometers, comparable to the best telescopic images. [P-20829C]
On January 9, 1979, at a distance of 54 million kilometers, the image surpassed all ground-based views and approached the resolution of the Pioneer 10 and 11 photos. [P-20926C]
In this view, taken January 24 at a distance of 40 million kilometers, the resolution exceeded 1000 kilometers. [P-20945C]

The Observatory Phase
The observatory phase, originally scheduled to start on December 15, 1978, eighty days before encounter, was postponed until January 4, 1979, to provide the flight team a holiday-season break. For the next two months, Voyager would carry out a long-term scientific study—a "time history"—of Jupiter, its satellites, and its magnetosphere. On January 6, Voyager 1 began photographing Jupiter every two hours—each time taking a series of four photographs through different color filters as part of a long-duration study of large-scale atmospheric processes, so scientists could study the changing cloud patterns on Jupiter. Even the first of these pictures showed that the atmosphere was dynamic "with more convective structure than had previously been thought." Particularly striking were the changes in the planet, especially near the Great Red Spot, that had taken place since the Pioneer flybys in 1973 and 1974. Jupiter was wearing a new face for Voyager.

In mid-January, photos of Jupiter were already being praised for "showing exceptional details of the planet's multicolored bands of clouds." Still 47 million kilometers from the giant, Voyager 1 had by now taken more than 500 photographs of Jupiter. Movements of cloud patterns were becoming more obvious; feathery structures seemed painted across some of the bands that encircle the planet; swirling features were huddled near the Great Red Spot. The satellites were also beginning to look more like worlds, with a few bright spots visible on Ganymede and dark red poles and a bright equatorial region clearly seen on Io when it passed once each orbit across the turbulent face of Jupiter.

From January 30 to February 3, Voyager sent back one photograph of Jupiter every 96 seconds over a 100-hour period. Using a total of three different color filters, the spacecraft thus produced one color picture every 4¾ minutes, in order to make a "movie" covering ten Jovian "days." To receive these pictures, sent back over the high-rate X-band transmitter, the Deep Space Network's 64-meter antennas provided round-the-clock coverage. Voyager 1 was ready for its far encounter phase.
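For readers who want to check the cadence arithmetic in the paragraph above, the short sketch below works it through. It is an illustrative calculation only; the 9.9-hour Jovian rotation period is an assumed round figure rather than a number quoted in this chapter, and three filtered frames at 96-second intervals come to 4.8 minutes per color composite, close to the quoted 4¾ minutes.

```python
# Rough check of the Voyager 1 "rotation movie" imaging cadence described
# above. Illustrative sketch only; the 9.9-hour rotation period of Jupiter
# is an assumed round value, not a figure quoted in this chapter.

FRAME_INTERVAL_S = 96          # one photograph every 96 seconds
FILTERS_PER_COLOR_IMAGE = 3    # three filtered frames per color composite
CAMPAIGN_HOURS = 100           # the January 30 to February 3 observing run
JOVIAN_DAY_HOURS = 9.9         # assumed approximate rotation period

seconds_per_color_image = FRAME_INTERVAL_S * FILTERS_PER_COLOR_IMAGE
minutes_per_color_image = seconds_per_color_image / 60
total_frames = CAMPAIGN_HOURS * 3600 // FRAME_INTERVAL_S
color_images = total_frames // FILTERS_PER_COLOR_IMAGE
jovian_days_covered = CAMPAIGN_HOURS / JOVIAN_DAY_HOURS

print(f"One color image every {minutes_per_color_image:.1f} minutes")
print(f"Roughly {color_images} color images spanning {jovian_days_covered:.1f} Jovian days")
```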
The Galilean satellites of Jupiter first began to show as tiny worlds, not mere points of light, as the Voyager 1 observatory phase began. In this view taken January 17, 1979, at a range of 47 million kilometers, the differing sizes and surface reflectivities (albedos) of Ganymede (right center) and Europa (top right) are clearly visible. The view of Jupiter is unusual in that the Great Red Spot is not easily visible, but can just be seen at the right edge of the planet. Most pictures selected for publication include the photogenic Red Spot. [P-20938C]

Far Encounter Phase

By early February Jupiter loomed too large in the narrow-angle camera to be photographed in one piece; 2 × 2 three-color (violet, orange, green) sets of pictures were taken for the next two weeks; by February 21, Jupiter had grown too large even for that, and 3 × 3 sets were scheduled to begin. When these sets of pictures are pasted together to form a single picture, the result is called a mosaic.
By February 1, 1979, Voyager 1 was only 30 million kilometers from Jupiter, and the resolution of the imaging system corresponded to about 600 kilometers at the cloud tops of the giant planet. At this time, a great deal of unexpected complexity became apparent around the Red Spot, and movie sequences clearly showed the cloud motions, including the apparent six-day rotation of the Red Spot. [P-20993C]
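The steadily improving resolution quoted in these captions follows from simple geometry: for a camera of fixed angular resolution, the smallest resolvable feature shrinks in direct proportion to the range. The sketch below checks the quoted figures against that linear scaling; the angular scale it uses is inferred by fitting the caption numbers themselves (about 2000 kilometers at 85 million kilometers) and is not an official narrow-angle camera specification.

```python
# Resolution scales linearly with range at a fixed angular resolution.
# The angular scale below is inferred from the captions in this chapter
# (about 2000 km per line pair at 85 million km); it is an assumption for
# illustration, not an official camera specification.

ANGULAR_SCALE_RAD = 2000 / 85e6   # roughly 2.4e-5 radians per line pair

quoted = [
    (85e6, 2000),      # December 10 image
    (40e6, 1000),      # January 24 image
    (30e6, 600),       # February 1 image
    (349_000, 8),      # closest approach to Jupiter
]

for range_km, quoted_km in quoted:
    predicted_km = ANGULAR_SCALE_RAD * range_km
    print(f"range {range_km:>11,.0f} km: predicted ~{predicted_km:,.0f} km, caption gives {quoted_km} km")
```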
One of the most spectacular planetary photographs ever taken was obtained on February 13 as Voyager 1 continued its approach to Jupiter. By this time, at a range of 20 million kilometers, Jupiter loomed too large to fit within a single narrow-angle imaging frame. Passing in front of the planet are the inner two Galilean satellites. Io, on the left, already shows brightly colored patterns on its surface, while Europa, on the right, is a bland ice-covered world. The scale of all of these objects is huge by terrestrial standards; Io and Europa are each the size of our Moon, and the Red Spot is larger than the Earth. [P-21082C]

While the imaging experiments were in the limelight, the other scientific instruments had also begun to concentrate on the Jovian system. The ultraviolet spectrometer had been scanning the region eight times a day; the infrared spectrometer (IRIS) spent 1½ hours a day analyzing infrared emissions from various longitudes in Jupiter's atmosphere; the planetary radio astronomy and plasma wave instruments looked for radio bursts from Jupiter and for plasma disturbances in the region; the photopolarimeter had begun searching for the edge of Io's sodium torus; and a watch was begun for the bow shock—the outer boundary of the Jovian magnetosphere.

On February 10, Voyager 1 crossed the orbit of Sinope, Jupiter's outermost satellite. Yet the spacecraft even then had a long way to go—still 23 million kilometers from Jupiter, but closing in on the planet at nearly a million kilometers a day. A week later, targeted photographs of Callisto began to provide coverage of the satellite all around its orbit; similar photos of Ganymede began on February 25. Meanwhile excitement was building. As early as February 8 and 9, delight with the mission and anticipation of the results of the encounter in March were already evident. Garry E. Hunt, from the Laboratory for Planetary Atmospheres, University College, London, a member of the Imaging Team, discussing the appearance of Jupiter's atmosphere as seen by Voyager during the previous month, said, "It seems to be far more photogenic now than it did during the Pioneer encounters; I'm more than delighted by it—it's an incredible state of affairs. There are infinitely more details than ever imagined."
The Voyager Project was operated from the Jet Propulsion Laboratory, which is managed for NASA by the California Institute of Technology. Located in the hills above Pasadena, California, JPL is the main center for U.S. exploration of the solar system. [JB17249BC]
The Voyager TV cameras do not take color pictures directly as do commercial cameras. Instead, a color image is reconstructed on the ground from three separate monochromatic images, obtained through color filters and transmitted separately to Earth. There are a number of possible filter combinations, but the most nearly "true" color is obtained with originals photographed in blue (480 nanometers), green (565 nanometers), and orange (590 nanometers) light. Before these can be combined, the individual frames must be registered, correcting for any change in spacecraft position or pointing between exposures. Often, only part of a scene is contained in all three original pictures. Shown here is a reconstruction of a plume on Jupiter, photographed on March 1, 1979.
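As a minimal sketch of the reconstruction idea described above, the snippet below stacks three registered, filtered frames into a single color picture. It assumes the frames have already been registered and saved as image files, and it uses NumPy and Pillow with hypothetical file names; the actual Voyager ground processing (geometric registration, radiometric calibration, handling of partially overlapping frames) was far more involved.

```python
# Minimal sketch: combine three registered monochrome frames, taken through
# blue, green, and orange filters, into one color image. NumPy/Pillow and
# the file names are illustrative assumptions, not part of the Voyager
# ground system; mapping the orange frame to the red channel is likewise an
# approximation of "true" color.

import numpy as np
from PIL import Image

def reconstruct_color(blue_path: str, green_path: str, orange_path: str) -> Image.Image:
    # Load each filtered exposure as an 8-bit grayscale array.
    frames = [np.asarray(Image.open(p).convert("L"))
              for p in (orange_path, green_path, blue_path)]
    # Crop to the common area so all three channels cover the same scene,
    # discarding the nonoverlapping margins mentioned in the caption.
    h = min(f.shape[0] for f in frames)
    w = min(f.shape[1] for f in frames)
    frames = [f[:h, :w] for f in frames]
    # Stack as (red, green, blue) channels.
    return Image.fromarray(np.dstack(frames), mode="RGB")

# Hypothetical usage:
# reconstruct_color("plume_blue.png", "plume_green.png", "plume_orange.png").save("plume_color.png")
```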
The colors used to print the three separate frames can be seen clearly in the nonoverlapping areas. For other pictures in this book, the nonoverlapping partial frames are omitted. [P-21192]

The pictures from Voyager are "clearly spectacular," said Lonne Lane, Assistant Project Scientist for Jupiter. "We're getting even better results than we had anticipated. We have seen new phenomena in both optical and radio emissions. We have definitely seen things that are different—in at least one case, unanticipated—and are begging for answers we haven't got." There was already, still almost a month from encounter, a strong feeling of accomplishment among the scientists and engineers; they had done a difficult task and it had been successful.

By the last week of February 1979, the attention of thousands of individuals was focused on the activities at JPL. Scientists had arrived from universities and laboratories around the country and from abroad, many bringing graduate students or faculty colleagues to assist them. Engineers and technicians from JPL contractors joined NASA officials as the Pasadena motels filled up. Special badges were issued and reserved parking areas set aside for the Voyager influx. Twenty-four hours a day, lights burned in the flight control rooms, the science offices, the computer areas, and the photo processing labs. In order to protect those with critical tasks to perform from the friendly distraction of the new arrivals, special computer-controlled locks were placed on doors, and security officers began to patrol the halls. By the end of the month the press had begun to arrive. Amid accelerating excitement, the Voyager 1 encounter was about to begin.
The smaller-scale clouds on Jupiter tend to be more irregular than the large ovals and plumes. At the lower right, one of the three large white ovals clearly shows internal structure, with the swirling cloud pattern indicating counterclockwise, or anticyclonic, flow. A smaller anticyclonic white feature near the center is surrounded by a dark, cloud-free band where one can see to greater depths in the atmosphere. This photo was taken March 1 from a distance of 4 million kilometers. [P-21183C]
CHAPTER 6 THE FIRST ENCOUNTER

The Giant Is Full of Surprises

The Voyager 1 encounter took place at 4:42 a.m. PST, March 5, 1979. About six hours before, while the spacecraft continued to hurtle on toward Jupiter, overflow crowds had poured out of Beckman Auditorium on the campus of the California Institute of Technology. There had been a symposium entitled Jupiter and the Mind of Man. More than one face that evening had turned to look toward the Pasadena night, for there, glittering against the fabric of the sky, was the "star" of the show—a planet so huge that the Earth would be but a blemish on its Great Red Spot. One might have tried to imagine Voyager 1, large by spacecraft standards, rapidly closing in on Jupiter—a pesky, investigative "mosquito" buzzing around the Jovian system. Meanwhile, at JPL, pictures taken by Voyager 1 were flashing back every 48 seconds, pictures that were already revealing more and more of Jupiter's atmosphere and would soon disclose four new worlds—the Galilean satellites. The encounter was almost at hand.

4:42 a.m.—Voyager 1 had made its closest approach to Jupiter 37 minutes earlier. Only now were the signals that had been sent out from the spacecraft at 4:05 a.m. reaching the Earth. But the moment of closest encounter did not have the same impact a landing would have had. Although the champagne would flow later for those who had worked so long and hard on this successful mission, there was none now. In the press room there were just four coffeepots working overtime to help keep the press alert. In the science areas, focus had shifted to the satellite encounters, which would stretch over the next 24 hours; meanwhile, a few persons tried to catch a little sleep at their desks before the first closeups of Io came in. The instant of encounter came ... and went ... with no screams, no New Year's noisemakers. All the excitement of the mission lay both behind ... and ahead.

Thursday, February 22. The encounter activity was kicked off in Washington, D.C., by a press conference at NASA Headquarters. After introductory statements by NASA and JPL officials, some of the scientific results from the observatory and far encounter phases were presented. Both the plasma wave and the planetary radio astronomy instruments had detected very-low-frequency radio emission that generally increased in intensity whenever either the north or south magnetic pole of Jupiter tipped toward the spacecraft. The plasma wave experiment had also detected radiation that seemed to come from a region either near or beyond the orbit of Io—perhaps even as far as the outer magnetosphere. In addition, Frederick L. Scarf, Principal Investigator for the plasma wave instrument, released a recording of the ion sound waves created by 5 to 10 kilovolt protons traveling upstream from Jupiter. James Warwick, Principal Investigator of the radio astronomy investigation, enthusiastically reported the detection of a striking new low-frequency radiation from Jupiter, at wavelengths of tens of kilometers. Such radio waves cannot penetrate the ionosphere of the Earth, and thus had never been detected before. Because of their huge scale, these bursts probably did not originate at Jupiter itself, but in magnetospheric regions above the planet—perhaps in association with the high-density plasma torus associated with Io.
Warwick commented that "only from the magnificent perspective of the Voyager as it approaches Jupiter have we been able to get this complete picture."

The Voyager 1 encounter with Jupiter took place during a little more than 48 hours, from the inbound to the outbound crossing of the orbit of Callisto. This figure shows the spacecraft path as it would be seen from above the north pole of Jupiter. Closest approach to Jupiter was 350 000 kilometers. The close flybys of Io, Ganymede, and Callisto all took place as the spacecraft was outbound.

[Figure: Voyager 1 trajectory through the Jovian system. Launch date 9/5/77, Jupiter arrival date 3/5/79; labels mark periapsis, the closest approaches to Amalthea, Io, Europa, Ganymede, and Callisto, and the Sun and Earth occultation zones.]

Another unexpected result was announced by Lyle Broadfoot, Principal Investigator for the ultraviolet spectrometer investigation. The scientists had expected to find very weak ultraviolet emissions on the sunlit side of Jupiter, caused by sunlight being scattered from hydrogen and helium in Jupiter's upper atmosphere. "Instead we are seeing a spectacular auroral display. There are two features of the emission—the auroral emission which comes from the planet and a second type of emission which appears to come from a radiating torus or shell around the planet at the orbit of Io. The spectral content of these two radiating sources is distinctly different. What we find is that the auroral emission from Jupiter's atmosphere is so strong that it completely dominates the emission spectrum even on the sunlit side of the atmosphere."

"Not since Mariner 4 carried its TV camera to Mars fifteen years ago have we been less prepared—have we been less certain of what we are about to see over the next two weeks," said Bradford Smith, Imaging Team Leader. He mentioned the time-lapse "rotation movie," in which the colorful planet spun through ten full Jupiter days; tiny images of the satellites passed across Jupiter's face as though being whipped along by the rotation of the giant. A week or so earlier, when this film had been shown for the first time to the full Imaging Team, it provided an occasion for good-humored rivalry between planet people and satellite people, with jokes about the satellites getting in the way of the important studies of Jupiter. For the next few days, the imaging focus remained on Jupiter; it shifted to Io, Ganymede, and Callisto, as each was passed in turn after closest approach to the planet.
Tuesday, February 27. (Range to Jupiter, 7.1 million kilometers). At a distance of 660 million kilometers from Earth, within 90 Jovian radii (RJ) of Jupiter, Voyager 1 was prepared to begin the encounter with the planet's magnetosphere. On the previous day the spacecraft had crossed the point, at 100 RJ, at which Pioneers 10 and 11 had found the bow shock, the first indication of the magnetospheric boundary. The start of Voyager's plunge into the Jovian magnetosphere was overdue, and scientists anxiously watched the data from the particles and fields instruments, looking for the first indication of disordered magnetic fields and altered particle densities that would mark the bow shock. Apparently, higher solar wind pressure, associated with increased solar activity since 1974, had compressed the magnetosphere, but no one could predict how strong this compression might be.

ENCOUNTER DISTANCES FOR VOYAGER 1

Object      Range to Center at Closest Approach (kilometers)    Best Image Resolution (km per line pair)
Jupiter     349 000                                             8
Amalthea    417 000                                             8
Io           21 000                                             1
Europa      734 000                                             33
Ganymede    115 000                                             2
Callisto    126 000                                             2

For the first time since its discovery, Lyle Broadfoot and his UVS colleagues suggested a probable identification for the unexpected ultraviolet emission near the orbit of Io. The most likely candidate was sulfur atoms with two electrons removed (S III), at an inferred temperature perhaps as high as 200 000 K. An additional indication of sulfur came from Mike Krimigis, who reported that the low-energy charged particles instrument had detected bursts of sulfur ions streaming away from Jupiter that had apparently escaped from the inner magnetosphere. No explanations were offered, however, for the presence of large amounts of this element.

At JPL, a press room had been opened in Von Karman Auditorium to accommodate the hundred or so reporters expected to arrive. To keep all the interested people informed of Voyager progress, frequent television reports were beamed over closed-circuit TV throughout JPL. From an in-lab television studio called the Blue Room, JPL scientist Al Hibbs, who had played a similar role during the Viking Mission to Mars, provided hourly reports and interviewed members of the Voyager teams. As the pressure for constant commentary and instant analysis increased, Garry Hunt of the Imaging Team was also called on to host activities in the Blue Room, where his British accent added an additional touch of class to the operation. As the encounter progressed, the JPL television reports reached a wider audience. In the Los Angeles area, KCET Public Television began a nightly "Jupiter Watch" program, with Dr. Hibbs as host. During the encounter days, service was extended to interested public television stations throughout the nation. In this way, tens of thousands of persons were able to experience the thrill of discovery, seeing the closeup pictures of Jupiter and its satellites at the same moment as the scientists at JPL saw them, and listening to the excited and frequently awestruck commentary as the first tentative interpretations were attempted. Unfortunately, the commercial television networks did not make use of this opportunity, and the greatest coverage available to most of the country was a 90-second commentary on the nightly network news.

Wednesday, February 28. (Range to Jupiter, 5.9 million kilometers). At 6:33 a.m., at a range of 86 RJ, Voyager 1 finally reached Jupiter's bow shock. But by 12:28 p.m. the solar wind had pushed the magnetosphere back toward Jupiter, and Voyager was once more outside, back in the solar wind.
Not until March 2, at a distance of less than 45 RJ, would the spacecraft enter the magnetosphere for the final time.

At 11 a.m. the first daily briefing to the press was given. "After nearly two months of atmospheric imaging and perhaps a week or two of satellite viewing, [we're] happily bewildered," said Brad Smith. The Jovian atmosphere is "where our greatest state of confusion seems to exist right at the moment, although over the next several days we may find that some of our smirking geology friends will find themselves in a similar state. I think, for the most part, we have to say that the existing atmospheric circulation models have all been shot to hell by Voyager. Although these models can still explain some of the coarse zonal flow patterns, they fail entirely in explaining the detailed behavior that Voyager is now revealing." It was thought, from Pioneer results, that Jupiter's atmosphere showed primarily horizontal or zonal flow near the equatorial region, but that the zonal flow pattern broke down at high latitudes. But Voyager found that "zonal flow exists poleward as far as we can see."

Smith also showed a time-lapse movie of Jupiter assembled from images obtained during the month of January. Once each rotation, approximately every ten hours, a color picture had been taken. Viewed consecutively, these frames displayed the complex cloud motions on a single hemisphere of Jupiter, as they would be seen from a fixed point above the equator of the planet. The film revealed that clouds move around the Great Red Spot in a period of about six days, at speeds of perhaps 100 meters per second. The Great Red Spot, as well as many of the smaller spots that dot the planet, appeared to be rotating anticyclonically. Anticyclonic motion is characteristic of high-pressure regions, unlike terrestrial storms, which are low-pressure systems. Smith noted that "Jupiter is far more complex in its atmospheric motions than we had ever imagined. We are seeing a much more complicated flow of cyclonic and anticyclonic vorticity, circulation. We see currents which flow along and seem to interact with an obstacle and turn around and flow back." There is a Jovian jet stream that is "moving along at well over 100 meters per second. Several of these