Towards an Open Data Center with an Interoperable Network (ODIN)
Volume 3: Software Defined Networking and OpenFlow

Casimer DeCusatis, Ph.D.
Distinguished Engineer
IBM System Networking, CTO Strategic Alliances
IBM Systems and Technology Group

May 2012
Executive Overview
This volume of Towards an Open Data Center with an Interoperable Network (ODIN) describes software defined networking (SDN) and OpenFlow. SDN is used to simplify network control and management,
automate network virtualization services, and provide a platform from which to build agile network
services. SDN leverages both IETF network virtualization overlays and the ONF OpenFlow
standards. OpenFlow is an emerging industry standard protocol which moves the network control
plane into software running on an attached server. The flow of network traffic can then be
controlled dynamically, without the need to rewire the data center network. Some of the benefits
of this approach include better scalability, larger layer 2 domains and virtual devices, and faster convergence. These technologies form the basis for networking as a
service (NaaS) in modern data centers.
3.1 Software-Defined Networking
While networks have supported significant innovations in compute and storage, networking technologies have not kept pace with the expanding virtualization and programmability of those platforms. Networks are thus increasing in complexity due to the growing demand for multi-
tenancy, higher bandwidths, and the emergence of on-demand resource and application models
in the Cloud. As a result, network protocols, not initially designed with these requirements in
mind, are becoming cumbersome to configure and maintain, limiting scalability and agility.
Software-Defined Networking (SDN) was created to address these challenges by altering the
traditional paradigm for network control. By decoupling the control and data planes, introducing programmability, centralizing intelligence, and abstracting applications from the underlying network infrastructure, SDN enables highly scalable, flexible networks that readily adapt to changing business needs.
Consider the design of conventional data center networks. The network control plane implements
many complex networking protocols, which collectively require millions of lines of code. Each
protocol may be thought of as a programming language, with its own usage rules. As with any
language, the proper context and meaning can only be understood by someone familiar with both
the vocabulary (syntax) and the grammar (semantics). Operating a typical networking device is thus analogous to learning to speak multiple languages. Further, the
availability of new features and functions on these devices is limited by the development priorities
of the companies who develop this equipment. For these reasons, there are advantages to
introducing an open networking language and an open switch programming model, known as
software-defined networking, similar to the use of Linux as an alternative to vendor proprietary
server operating systems. Open networking simplifies network control and management, and responds to the need for more agile deployment of network services. Such an approach is also complementary to other trends in the networking industry, including increasing data rates, higher
levels of virtualization, and intelligent management with more extensive automation.
SDN offers substantial benefits that may be realized within the data center. These include support for multi-vendor environments, more granular network control (at the session, user, or device level), and
improved automation and management. SDN also promotes innovation from network equipment
providers by supporting the introduction of new capabilities or upgrades without the need to
access individual networking devices, and reducing inter-dependencies between network
services and infrastructure. SDN paves the way to a dynamic and flexible network architecture
that protects existing investments, yet future-proofs the network to support rapidly changing
business needs. Ultimately, the network evolves from infrastructure to a business-critical service
delivery platform.
By abstracting the control and management aspects of a network into a logical software program,
SDN allows real-time programmability and manageability of networks comparable to what is
achieved on computers. It can leverage a centralized logical network view easily manipulated via
software to implement complex networking rules. This allows networks to achieve unprecedented
levels of scalability and flexibility, as well as the dynamic behavior that cloud services demand.
Figure 3.1 – Software-Defined Network Architecture
Figure 3.1 depicts a logical view of the SDN architecture. The infrastructure layer sends control information via an interface to SDN control software in the control layer, where an abstracted view of the network is created and the configuration and status of the underlying infrastructure are maintained. Network services are generated by leveraging the information contained in the SDN controller software. Business applications then have access to network configuration and infrastructure information via an API exposed by these network services. Unlike traditional networks, where this information often can be accessed only manually and one device at a time, here information is exchanged in real time and can be processed automatically using algorithms and programs.
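To make this layering concrete, the short Python sketch below shows how a business application might consume such a network service API. The controller address, endpoint paths, and field names are hypothetical assumptions, since the SDN architecture does not mandate any particular API.

    # Hypothetical northbound API interaction: a business application queries
    # the controller's abstracted network view and requests a QoS change.
    # The URL, endpoint paths, and JSON fields are illustrative only.
    import json
    import urllib.request

    CONTROLLER = "http://sdn-controller.example.com:8080"  # assumed address

    def get_network_view(network_id: str) -> dict:
        """Fetch the controller's abstracted view of one virtual network."""
        with urllib.request.urlopen(f"{CONTROLLER}/networks/{network_id}") as resp:
            return json.load(resp)

    def request_bandwidth(network_id: str, mbps: int) -> None:
        """Ask the network service layer to provision a bandwidth guarantee."""
        body = json.dumps({"guaranteed_mbps": mbps}).encode()
        req = urllib.request.Request(
            f"{CONTROLLER}/networks/{network_id}/qos", data=body,
            headers={"Content-Type": "application/json"}, method="PUT")
        urllib.request.urlopen(req)

The point of the sketch is the division of labor: the application states its intent, and the controller translates that intent into device-level configuration across the infrastructure layer.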
SDN provides a new approach for managing end-to-end connectivity by maintaining a
centralized, global view of the network. By centralizing network state in the control layer,
management, configuration, security, and network resources are optimized through flexible,
dynamic and automated SDN programs. Global, controlled access to the data plane offers the potential for unprecedented programmability, as network behavior can easily be adapted to the needs of business applications. This enables the scalability and agility needed to keep up with dramatic shifts in user behavior, the ever-growing appetite for increased bandwidth, and a range of new services.
Another important benefit of the SDN architecture is enhanced automation, allowing networks to
accommodate highly elastic and rapidly changing demands of users or cloud-based applications.
Cloud-based applications can now be managed through intelligent orchestration and provisioning systems that reach beyond compute and storage to include the network. SDN opens the door
for on-demand resource allocation, self-service provisioning, and truly virtualized networking.
SDN is used for many purposes, including simplifying network control and management,
automating network virtualization services, and providing a platform from which to build agile
network services. To accomplish these goals, SDN leverages both IETF network virtualization
overlays and the ONF OpenFlow standards. We will discuss each of these approaches in the
following sections.
3.2 Virtual Network Overlays
Server virtualization brings with it new data center networking requirements. In addition to the
regular requirements of interconnecting physical servers, network designs for virtualized data
centers have to support the following:
● Huge number of endpoints. Today physical hosts can effectively run tens of virtual machines, each with its own networking requirements. In a few years, a single physical machine will be able to host 100 or more virtual machines.
● Large number of tenants fully isolated from each other. Scalable multi-tenancy support requires a large number of networks that have address space isolation, management isolation, and configuration independence. Combined with a large number of endpoints, these factors will make multi-tenancy at the physical server level an important requirement in the future.
● Dynamic network and network endpoints. Server virtualization technology allows for dynamic and automatic creation, deletion, and migration of virtual machines. Networks must support this function in a transparent fashion, without imposing restrictions due to, e.g., IP subnet requirements.
● A decoupling of the current tight binding between the networking requirements of virtual machines and the underlying physical network.
Rather than treat virtual networks simply as an extension of physical networks, these
requirements can be met by creating virtual overlay networks in a way similar to creating virtual
servers over a physical server: independent of physical infrastructure characteristics, ideally
isolated from each other, dynamic, configurable and manageable. Hypervisor based overlay
networks (which take advantage of virtual Ethernet switches) can provide networking services to
virtual servers in a data center. Virtual Ethernet switches form part of the platform for creating
agile network services; they can also aid in simplifying network control and management and
automating network virtualization services. Overlay networks are a method for building one
network on top of another. The major advantage of overlay networks is their separation from the
underlying infrastructure in terms of address spaces, protocols and management. Overlay
networks allow a tenant to create networks designed to support specific distributed workloads,
without regard to how that network will be instantiated on the data center's physical network. In standard TCP/IP networks, overlays are usually implemented by tunneling. The overlay network
payload is encapsulated within an overlay header and delivered to the destination by tunneling
over the underlying infrastructure.
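As a simple illustration of tunneling, the Python sketch below wraps a tenant frame in a toy overlay header carrying a virtual network identifier and ships it across the underlay as ordinary UDP. The 8-byte header layout is invented for illustration, and the port number is borrowed from VXLAN only for flavor; real overlay encapsulations define their own wire formats.

    # Toy overlay tunneling: the original frame becomes the payload of an
    # outer UDP packet whose header carries a virtual network identifier.
    # The header layout (4-byte vnet ID + 4 reserved bytes) is invented here.
    import socket
    import struct

    OVERLAY_UDP_PORT = 4789  # assumption, borrowed from VXLAN's registered port

    def encapsulate(inner_frame: bytes, vnet_id: int) -> bytes:
        """Prepend the toy overlay header to the tenant frame."""
        return struct.pack("!II", vnet_id, 0) + inner_frame

    def decapsulate(outer_payload: bytes) -> tuple[int, bytes]:
        """Recover (vnet_id, original frame) at the far end of the tunnel."""
        vnet_id, _reserved = struct.unpack("!II", outer_payload[:8])
        return vnet_id, outer_payload[8:]

    def send_over_underlay(inner_frame: bytes, vnet_id: int, dest_host: str) -> None:
        """Deliver the encapsulated frame to the destination physical host."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(encapsulate(inner_frame, vnet_id), (dest_host, OVERLAY_UDP_PORT))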
As multiple networking product providers have recognized overlay networks as a way to meet the
growing needs of virtualized data centers, multiple solutions have been proposed. Recently the
industry has begun work to find common areas of standardization. The first step towards this goal has been to publish a common problem statement through the IETF and to form a working group to standardize solutions. For the remainder of this discussion, we will focus on Distributed
Overlay Virtual Ethernet (DOVE).
Distributed Overlay Virtual Ethernet (DOVE)
DOVE is a layer 2/3 overlay network which employs packet encapsulation to form instances of
overlay networks that separate the virtual networks from the underlying infrastructure and from
each other. The separation means separate address spaces, ensuring that virtual network traffic
is seen only by network endpoints connected to their own virtual network, and allowing different
virtual networks to be managed by different administrators. A DOVE network instance can be
created and deleted and virtual machines can be attached to and detached from the DOVE
network instance as needed. Upon creation, every DOVE instance is assigned a unique identifier
and all the traffic sent over this overlay network carries the DOVE instance identifier in the
encapsulation header in order to be delivered to the correct destination virtual machine. In
principle, DOVE can also be extended across multiple data centers over long distances.
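A minimal sketch of this lifecycle, with names and data structures invented for illustration rather than taken from any DOVE implementation:

    # Illustrative DOVE instance lifecycle: each instance receives a unique
    # identifier on creation, which all of its encapsulated traffic carries.
    import itertools

    _ids = itertools.count(1)
    instances: dict[int, set[str]] = {}   # instance ID -> attached VM names

    def create_instance() -> int:
        instance_id = next(_ids)          # unique identifier assigned on creation
        instances[instance_id] = set()
        return instance_id

    def attach_vm(instance_id: int, vm: str) -> None:
        instances[instance_id].add(vm)    # VMs attach and detach as needed

    def detach_vm(instance_id: int, vm: str) -> None:
        instances[instance_id].discard(vm)

    def delete_instance(instance_id: int) -> None:
        instances.pop(instance_id, None)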
• Switches learn MAC addresses of physical hosts and not of VMs
• Routers route IP addresses of physical hosts and not of VMs
• Switches and routers are not aware of VMs and DOVE Networks

[Figure: three DOVE networks overlaid on the data center network, spanning six physical hosts whose DOVE switches (VAN modules) serve the locally hosted VMs.]

Figure 3.2 – DOVE Switches
Figure 3.2 shows DOVE switches residing in data center hosts and providing network service for
hosted virtual machines so that virtual machines are connected to independent isolated overlay
networks. As virtual machine traffic never leaves physical hosts in a non-encapsulated form,
physical network devices are not aware of virtual machines, their addresses, and their
connectivity patterns.
Virtual machines connect to a DOVE network through network nodes located in physical hosts
known as DOVE switches. DOVE switches are similar in function to the traditional hypervisor
switches but also function as overlay network nodes. Virtual machine interfaces are marked as
being connected to a specific DOVE instance by the DOVE switch that resides in each DOVE
enabled physical host in a data center. DOVE switches are in the network I/O path of the virtual
machines and capture the virtual machine’s traffic, identify it as belonging to a particular DOVE
network, add the appropriate DOVE header, and then use the physical infrastructure to deliver
the encapsulated packet to the DOVE switch on the destination physical server. Upon receiving
the encapsulated packet from the physical network, the DOVE switch parses and removes the
encapsulation header and delivers the packet to the correct destination virtual machine as
identified both by the target virtual machine address in the packet and by the virtual network
identifier in the encapsulation header. When the source and destination virtual machines reside
on the same physical server, the DOVE switch on that server delivers the packet directly without
using the physical network infrastructure. In addition to providing data delivery, DOVE switches
participate in control plane protocols to exchange and distribute information about virtual machine
location, virtual machine addresses, virtual machine migration events, etc.
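The forwarding decision just described can be condensed into a short sketch, reusing the encapsulation helper from the earlier tunneling example. The location table that a real DOVE switch would populate through those control plane protocols is modeled here as a plain dictionary; all names are illustrative.

    # Sketch of a DOVE switch's data-path decision: deliver locally when the
    # destination VM shares this host, otherwise encapsulate and tunnel to
    # the physical host where the destination VM resides.
    class DoveSwitch:
        def __init__(self) -> None:
            # (instance ID, destination VM MAC) -> physical host IP,
            # learned via control plane exchanges with other DOVE switches
            self.location_table: dict[tuple[int, str], str] = {}
            self.local_vms: set[str] = set()  # MACs of VMs on this host

        def forward(self, instance_id: int, dst_mac: str, frame: bytes) -> None:
            if dst_mac in self.local_vms:
                # Same physical server: deliver directly, bypassing the
                # physical network entirely.
                self.deliver_local(dst_mac, frame)
            elif (host := self.location_table.get((instance_id, dst_mac))):
                # Remote VM: add the DOVE header and tunnel over the underlay.
                send_over_underlay(frame, instance_id, host)
            # Otherwise the destination is unknown; a real switch would
            # consult the control plane rather than silently drop the frame.

        def deliver_local(self, dst_mac: str, frame: bytes) -> None:
            pass  # hand the frame to the destination VM's virtual interface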
DOVE networks connect to other non-DOVE networks through special purpose edge appliances
known as DOVE gateways. The DOVE gateways receive encapsulated packets from DOVE
switches in physical servers, strip the DOVE headers and forward the packets to the non-DOVE
network using the appropriate network interfaces. A DOVE gateway provides connectivity
between a virtual machine attached to a DOVE network and the external public network. A DOVE
gateway is also used to connect systems to the DOVE network without requiring them to be run
on DOVE capable hypervisors.
Using DOVE, the switches in the physical network learn the MAC addresses of the physical hosts rather than the VMs, and traffic is routed using the hosts' IP addresses. In this way, DOVE presents a single MAC address for each physical server (or dual redundant addresses for high availability), significantly reducing the size
of TCAM and ACL tables. This overlay is transparent to physical switches external to the server,
and is thus compatible with other networking protocols (including Layer 3 ECMP or TRILL).
DOVE meets the growing requirements of virtualized data centers by supporting the creation of a very large number of virtual networks that are independent of the underlying physical infrastructure, isolated from each other, separately managed and configured, dynamic, and given independent address spaces. DOVE may be thought of as a multipoint tunnel
for communication between systems, including discovery mechanisms and provisions for
attachment to non-DOVE networks.
Overlay networks allow the virtual network to be defined through software, decoupling it from the limitations of the physical network. The physical network is therefore wired and configured once, and subsequent provisioning of virtual networks does not require the physical network to be re-wired or re-configured. Because the overlay hides the MAC addresses of the VMs from the physical infrastructure (significantly reducing the size of TCAM and ACL tables, as noted above), L3 routing with ECMP can be utilized more effectively, reducing the problems of large broadcast domains within the data center. And because the virtual network is independent of the physical network topology, these approaches make it possible to reduce the broadcast domains within a data center while still supporting VM migration. In other words, where VM migration typically required flat layer 2 domains, overlay networking technologies allow segmenting a data center while still supporting VM migration across the data center, and potentially between different data centers.
3.3 OpenFlow
The OpenFlow specification is controlled and published by the Open Networking Foundation (ONF), a recently formed nonprofit industry trade organization whose mission includes creating the most relevant software-defined networking (SDN) standards. The ONF licenses the trademark "OpenFlow Switching" to companies who adopt the standard, and is led by a board of directors drawn from companies that own and operate some of the largest networks in the world, including Deutsche Telekom, Facebook, Google, Microsoft, Verizon, Yahoo!, Goldman Sachs, and NTT. These companies are expected to lead the next generation of OpenFlow adoption.
OpenFlow is a component which enables implementation of SDN, and it is the only standardized
SDN-oriented communication protocol between the network infrastructure and control layers.
There are many benefits of a standard which opens the control plane of the switch network, and a
flow paradigm that offers granular traffic control. OpenFlow also offers a global view of the
network, including traffic statistics, and is fully compatible with existing Layer 2 and 3 protocols. In
contrast to a traditional switch, which provides a separate management/control plane for each
switch element in the network, OpenFlow extracts the control plane from the network. In this
environment, networking services (security, multi-pathing, and more) run like apps on a software-
defined network stack. The use of OpenFlow to enable an ecosystem of network application development, as opposed to the closed, vendor-proprietary approach used today, represents an important change in the way network services will be deployed in the future.
OpenFlow allows direct access and manipulation of the forwarding or data plane of network
infrastructure devices such as switches and routers, both physical and virtual (hypervisor-based).
In this way, OpenFlow can be compared to the instruction set of a CPU. The protocol specifies
basic primitives that can be used by an external software program on the network to program the
forwarding plane of network infrastructure devices, just like the instruction set of a CPU would
program a computer system. OpenFlow is an emerging technology with the potential to increase
the value of data center services dramatically. Implementing OpenFlow can provide network
administrators with greater control over their resources, integrated network and server
management, and an open management interface for routers and switches.
An OpenFlow switch consists of three parts, as illustrated in figure 3.3:
Figure 3.3 – Basic OpenFlow architecture
● Flow Table—Tells the switch how to process each data flow by associating an action with each
flow table entry
● Secure Channel—Connects the switch to a remote control processor (called the Controller) so
commands and packets can be sent between the controller and the switch
● OpenFlow Protocol—Provides an open, standardized interface for the controller to
communicate with the switch and to remove, add, or change flow control entries
The OpenFlow Protocol allows entries in the Flow Table to be defined by a server external to the
switch. For example, a flow could be a TCP connection, all the packets from a particular MAC or
IP address, or all packets with the same VLAN tag. Each flow table entry has a specific action
associated with a particular flow, such as forwarding the flow to a given switch port (at line rate),
encapsulating and forwarding the flow to a controller for processing, or dropping a flow’s packets
(for example, to help prevent denial of service attacks). There are many applications for
OpenFlow in modern networks. For example, a network administrator could create on-demand
‘express lanes’ for voice and data traffic that are time-sensitive. Software could also be used to
combine several fiber optic links into a larger virtual pipe to handle a particularly heavy flow of
traffic temporarily. When the data rush is over, the links would automatically separate under the
supervision of the controller. In cloud computing environments, OpenFlow improves scalability
and enables resources to be shared efficiently among different services in response to the
number of users.
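The flow-table concept itself is easy to model in a few lines of Python. In the sketch below, fields left as None act as wildcards and the highest-priority matching entry determines the action; this is a teaching model of the match-action idea, not the OpenFlow wire format.

    # Plain-Python model of an OpenFlow-style flow table: each entry pairs a
    # match (None fields are wildcards) with an action and statistics counters.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Match:
        in_port: Optional[int] = None
        eth_src: Optional[str] = None
        eth_dst: Optional[str] = None
        vlan_id: Optional[int] = None
        ip_src: Optional[str] = None
        ip_dst: Optional[str] = None
        tcp_dst: Optional[int] = None

        def covers(self, pkt: dict) -> bool:
            """True if every non-wildcard field equals the packet's value."""
            return all(v is None or pkt.get(k) == v for k, v in vars(self).items())

    @dataclass
    class FlowEntry:
        match: Match
        action: str        # e.g. "output:3", "controller", "drop"
        priority: int = 0
        packets: int = 0   # per-entry counters for statistics
        bytes: int = 0

    def lookup(table: list[FlowEntry], pkt: dict) -> str:
        """Highest-priority matching entry wins; misses go to the controller."""
        hits = [e for e in table if e.match.covers(pkt)]
        if not hits:
            return "controller"   # typical default: forward as a packet-in
        best = max(hits, key=lambda e: e.priority)
        best.packets += 1
        best.bytes += pkt.get("len", 0)
        return best.action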
There are different types of messages used by an OpenFlow controller. The switch-controller
connection is discovered using a symmetric protocol (like a Hello packet) and maintained using
periodic echo request/reply messages. There are also specific unidirectional messages sent from
the controller to the switch, or from the switch to the controller. For example, the controller may
configure the switch, query the switch capabilities, manage flow tables, or direct packets across
the network. Asynchronous messages may also pass from the switch to the controller which
announce changes in the switch state, network status, packet errors, or which send ingress
packets to the controller (such as ARPs from a VM).
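These three classes of messages can be summarized as follows; the type names mirror message types defined in the OpenFlow 1.0 specification, though the grouping below is a condensed sketch rather than an exhaustive list.

    # The three OpenFlow message classes described above, with representative
    # message types in each (a summary sketch, not the complete set).
    from enum import Enum, auto

    class Symmetric(Enum):
        HELLO = auto()            # connection discovery
        ECHO_REQUEST = auto()     # periodic liveness check
        ECHO_REPLY = auto()

    class ControllerToSwitch(Enum):
        FEATURES_REQUEST = auto() # query switch capabilities
        SET_CONFIG = auto()       # configure the switch
        FLOW_MOD = auto()         # add, modify, or delete flow table entries
        PACKET_OUT = auto()       # direct a packet out a specific port

    class AsyncFromSwitch(Enum):
        PACKET_IN = auto()        # ingress packet sent to the controller
        FLOW_REMOVED = auto()     # a flow entry expired or was deleted
        PORT_STATUS = auto()      # switch or port state change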
OpenFlow provides a basic set of global management abstractions, which can be used to control
features such as topology changes and packet filtering. OpenFlow takes advantage of the fact
that most modern Ethernet switches and routers contain flow tables, which run at line rate and
are used to implement functions such as quality of service (QoS), security firewalls, and statistical
analysis of data streams. OpenFlow standardizes a common set of functions that operate on
these flows and will be extended in the future as the standard evolves. The rules within OpenFlow
allow filtering on the N-tuples of an Ethernet frame, as shown in figure 3.4. A match-action table
provides logical mapping to a list of instructions which describe how to handle a packet. The
packet and byte counters are used to collect statistics on the interface. Masks of different styles can be implemented to filter and redirect traffic as desired (for example, certain packets might be routed to a firewall, others to a load balancer, or to some combination of network appliances).
Figure 3.4 – OpenFlow Rules, Match-Action Tables, and Statistics
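Building on the flow-table sketch above, the steering behavior described here might be expressed as a pair of entries; the port numbers and appliance placement are hypothetical.

    # Hypothetical steering rules: web traffic goes to a load balancer on
    # switch port 2, traffic for one protected host to a firewall on port 5.
    steering_table = [
        FlowEntry(Match(tcp_dst=80), action="output:2", priority=100),
        FlowEntry(Match(ip_dst="10.1.2.3"), action="output:5", priority=50),
    ]

    pkt = {"ip_dst": "192.0.2.7", "tcp_dst": 80, "len": 1500}
    print(lookup(steering_table, pkt))   # -> "output:2" (load balancer)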
As previously discussed, there are many potential applications for OpenFlow in modern data
center networks. Cloud computing environments which use multi-tenancy and resource pooling
can benefit from OpenFlow traffic steering capabilities. OpenFlow provides the isolation required
to host multiple tenants on the same server. Resource pooling helps reduce the need for multiple
appliances (load balancers, firewalls, and more) in each vertically oriented network stack. This in
turn reduces the number of physical appliances in the data center, reducing capital expense and
energy consumption; by load balancing across previously under-utilized appliances, overall
performance remains essentially unaffected.
Summary
SDN and OpenFlow represent emerging industry standards which hold the potential to reduce
capex and opex in the data center network. These approaches support highly virtualized data
centers and automate functions such as traffic filtering. By separating the data plane and control
plane within a switch, this approach enables use cases such as multi-tenancy and resource
pooling in cloud computing data centers. OpenFlow enables deterministic traffic flows for more
predictable network performance, as well as both lower and more consistent traffic latency.
OpenFlow is also used for policy driven content distribution, automated network configuration,
and dynamic reprovisioning of bandwidth on demand. Further, interoperability among multiple SDN controllers and networking resources helps ensure faster time to value in heterogeneous multi-vendor networks.
Technical References
Metzler, Jim (Ashton, Metzler & Associates; Co-Founder, Webtorials Analyst Division; Networking Track Chair, Interop). "The 2011 Cloud Networking Report," produced and distributed by Webtorials in association with Interop. Retrieved from: http://www.webtorials.com/content/2011/11/2011-cloud-networking-report.html
OpenFlow
For more information on OpenFlow, please visit www.opennetworkingfoundation.org
Or see the following articles:
Open Networking Foundation Pursues New Standards:
http://www.nytimes.com/2011/03/22/technology/internet/22internet.html?_r=1&ref=technology
How Software Will Redefine Networking:
http://gigaom.com/2011/03/21/open-networking-foundatio/
Tech Titans Back OpenFlow Networking Standard:
http://www.datacenterknowledge.com/archives/2011/03/22/tech-titans-back-openflow-networking-standard/
A Case for Overlays in DCN Virtualization:
http://www.itc23.com/fileadmin/ITC23_files/papers/DC-CaVES-m1569472213.pdf
IETF Problem Statement: Overlays for Network Virtualization:
http://tools.ietf.org/html/draft-narten-nvo3-overlay-problem-statement-01
Virtual Network Services for Federated Cloud Computing:
http://domino.watson.ibm.com/library/Cyberdig.nsf/papers/3ADF4AD46CBB0E6B852576770056B848/$File/H-0269.pdf
For More Information
IBM System Networking http://ibm.com/networking/
IBM PureSystems http://ibm.com/puresystems/
IBM System x Servers http://ibm.com/systems/x
IBM Power Systems http://ibm.com/systems/power
IBM BladeCenter Server and options http://ibm.com/systems/bladecenter
IBM System x and BladeCenter Power Configurator http://ibm.com/systems/bladecenter/resources/powerconfig.html
IBM Standalone Solutions Configuration Tool http://ibm.com/systems/x/hardware/configtools.html
IBM Configuration and Options Guide http://ibm.com/systems/x/hardware/configtools.html
Technical Support http://ibm.com/server/support
Other Technical Support Resources http://ibm.com/systems/support
Legal Information

IBM Systems and Technology Group
Route 100
Somers, NY 10589

Produced in the USA
May 2012
All rights reserved.

IBM, the IBM logo, ibm.com, BladeCenter, and VMready are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at ibm.com/legal/copytrade.shtml

InfiniBand is a trademark of InfiniBand Trade Association.

Intel, the Intel logo, Celeron, Itanium, Pentium, and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a registered trademark of Linus Torvalds.

Lotus, Domino, Notes, and Symphony are trademarks or registered trademarks of Lotus Development Corporation and/or IBM Corporation.

Microsoft, Windows, Windows Server, the Windows logo, Hyper-V, and SQL Server are trademarks or registered trademarks of Microsoft Corporation.

TPC Benchmark is a trademark of the Transaction Processing Performance Council.

UNIX is a registered trademark in the U.S. and/or other countries licensed exclusively through The Open Group.

Other company, product and service names may be trademarks or service marks of others.

IBM reserves the right to change specifications or other product information without notice. References in this publication to IBM products or services do not imply that IBM intends to make them available in all countries in which IBM operates. IBM PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This publication may contain links to third party sites that are not under the control of or maintained by IBM. Access to any such third party site is at the user's own risk and IBM is not responsible for the accuracy or reliability of any information, data, opinions, advice or statements made on these sites. IBM provides these links merely as a convenience and the inclusion of such links does not imply an endorsement.

Information in this presentation concerning non-IBM products was obtained from the suppliers of these products, published announcement material or other publicly available sources. IBM has not tested these products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

MB, GB and TB = 1,000,000, 1,000,000,000 and 1,000,000,000,000 bytes, respectively, when referring to storage capacity. Accessible capacity is less; up to 3GB is used in service partition. Actual storage capacity will vary based upon many factors and may be less than stated.

Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will depend on considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.

Maximum internal hard disk and memory capacities may require the replacement of any standard hard drives and/or memory and the population of all hard disk bays and memory slots with the largest currently supported drives available. When referring to variable speed CD-ROMs, CD-Rs, CD-RWs and DVDs, actual playback speed will vary and is often less than the maximum possible.

QCW03021USEN-00