SPARC T4-1 System Technical Overview
© 2011 Oracle Corporation Page 3
The following is intended to outline our general
product direction. It is intended for information
purposes only, and may not be incorporated into any
contract. It is not a commitment to deliver any
material, code, or functionality, and should not be
relied upon in making purchasing decisions. The
development, release, and timing of any features or
functionality described for Oracle’s products remains
at the sole discretion of Oracle.
© 2011 Oracle Corporation Page 4
Oracle SPARC T4-1 System
• Compute
• 1x SPARC T4 8-core CPU (64 threads)
• 16x DDR3 DIMMs (256GB max, 16GB DIMMs)
• I/O and Storage
• 6x Low-profile PCIe Gen2 x8 slots
• 2x 10GbE XAUI slots (shared w/ PCIe)
• 8x 2.5” hot-swap SAS-2 disks
• Availability and Management
• RAID 0/1 on-board (RAID 5/6 + BBWC via optional card)
• Hot-swap fans, disks & PSUs
• ILOM Service Processor
1-Socket, 2RU Enterprise-Class Data Center Server
© 2011 Oracle Corporation Page 5
T4-1 System Highlights
• Successor to SPARC T3-1 system
• Same 2U enclosure as SPARC T3-1 with PCIe 2.0
and SAS-2
• Same Service Processor module as SPARC T3
system
• Oracle Solaris 10 8/11 installed, Oracle Solaris 11
post-Release
• Oracle VM Server for SPARC 2.1 (Logical Domains)
© 2011 Oracle Corporation Page 6
T4-1 System
Target Applications
• Network infrastructure workloads to support the delivery of Web
and transaction services
• Middleware – for example Oracle Fusion
• Security Applications – the SPARC T4 processor's integrated on-chip
cryptographic accelerators provide wire-speed encryption for secure data
center operation, without a performance penalty for encrypting large
amounts of data
• Application Development – With up to 64 computing threads
packed in a compact, 2RU design, a single system can now run
more applications and reduce the number of required servers
• Multithreaded Applications (for example: Siebel CRM)
• Java Applications – for example: Oracle Weblogic 11g, Sun’s
Java Application Server, IBM’s Websphere
• Consolidation and Virtualization
© 2011 Oracle Corporation Page 7
T4-1 System
Target Workloads
• Database and data warehousing/BI
• Back Office (ERP, CRM, Supply Chain, Accounting, Lifecycle
Mgmt, OLTP)
• Internet Infrastructure (web, mail, app/j2ee, db, proxy, directory,
media)
• Large, single-app implementations and/or multi-app consolidation
(LDOMs, Containers) Popular Applications
• Oracle 11g, Siebel 7.7.1 CRM and Siebel Analytics 7.8.4, Baan
ERP, mySAP ERP (R/3), Websphere
• BEA Weblogic Server 9.0, Apache, Symantec Brightmail &
ScanEngine, Sybase IQ & ASE 12.6
• NetBackup Media Servers (SAN-> Tape), Sun JES (web, email,
dir), PeopleSoft 8.4, PeopleTools 8.46
© 2011 Oracle Corporation Page 8
SPARC T4-1 Comparison to SPARC
T3-1 & T5220
| Feature | SPARC T4-1 | SPARC T3-1 | SPARC Enterprise T5220 |
|---|---|---|---|
| Form Factor | 2U, 28" deep | 2U, 28" deep | 2U, 28" deep |
| CPU | SPARC T4, 2.85 GHz, 64 threads | SPARC T3, 1.65 GHz, 128 threads | UltraSPARC T2, 1.2/1.4/1.6 GHz, 64 threads |
| Memory | DDR3, 256 GB max, 16x slots | DDR3, 256 GB max, 16x slots | FB-DIMM, 128 GB max, 16x slots |
| Network | 4x GbE + 2x 10GbE (XAUI combo slots, shared w/ PCIe) | 4x GbE + 2x 10GbE (XAUI combo slots, shared w/ PCIe) | 4x GbE + 2x 10GbE |
| Internal Storage | Up to 8x 2.5" SAS HDD; 100 GB, 300 GB SSD | Up to 16x 2.5" SAS HDD; 32 GB SSD | 8x 2.5" SAS, incl. up to 4x SATA SSDs, hot-swap |
| Removable Media | 1x DVD-RW | 1x DVD-RW | 1x DVD-RW |
| Serial | 1x RS-232, 5x USB | 1x RS-232, 5x USB | 1x RS-232, 4x USB |
| PCI Express Slots | 6x x8 slots (low profile, PCIe 2) | 6x x8 slots (low profile, PCIe 2) | 6x (low profile) |
| Power Supply | 2x 1200 W AC, N+1 redundant/hot-swap | 2x 1200 W AC, N+1 redundant/hot-swap | 2x 750 W AC, redundant/hot-swap |
| Fans | 6x redundant hot-swap | 6x redundant hot-swap | 3x redundant hot-swap |
© 2011 Oracle Corporation Page 9
SPARC T4-1 System
Key Features
• SPARC T4 CPU
• T4 derives from T3, which derives from T2 Plus, T2, and ultimately T1
• Half the cores of T3, but a new pipeline (deeper pipeline, out-of-order &
speculative execution, perceptron branch prediction, up to 128 instructions
in flight, integrated crypto), dual 10 Gbps Ethernet, dual PCI Express Gen2
root complexes
• 1 T4 chip per system, 8 cores x 8 threads → 64 threads
• Core frequency of 2.85 GHz
• T4 chip is socketed, but not field-replaceable
• Memory – DDR3 DIMMs
• 16 sockets on motherboard → up to 256GB (@ 16GB DIMMs)
• 4 GB, 8 GB and 16 GB DIMM sizes (32 GB DIMMs post-RR)
• Registered, ECC DIMMs operating at 1066 MT/s
• New Buffers-on-Board (BoB) between CPU and DIMMs
© 2011 Oracle Corporation Page 10
SPARC T4-1 System
Key Features
• Software
• Solaris 10 8/11 installed; Solaris 11 post-Release
• Oracle VM Server for SPARC 2.1
• Service Processor (SP)
• Same SP module as T3-1, shared across T4-based
platforms
• AST2200 CPU, with local Flash and memory
• ILOM 3.0
• Parallel Boot
• Host and SP boot in parallel → faster boot time
• Support for “degraded” mode with failed SP
© 2011 Oracle Corporation Page 11
SPARC T4 Systems
I/O Subsystem
• PCIe 2.0 with 2 x8 Lane Physical Ports per Chip
• Full Bandwidth per port with Generation 2 speeds
• PCIe 2.0 to PCIe 1.0 autonegotiation
• Single Root IOV available post-Announce
• 4 GBps in each direction, Total 16 GBps bandwidth per
chip
• Lane width autonegotiation
• 256 byte max packet size
• Uses Latest Generation PEX Switch chips on Motherboard
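A quick back-of-the-envelope check of the bandwidth figures above (a sketch only; the 500 MB/s-per-lane figure assumes PCIe Gen2's 5 GT/s signaling with 8b/10b encoding):

```python
# PCIe Gen2 payload bandwidth: ~500 MB/s per lane per direction
# (5 GT/s raw signaling, 8b/10b encoding).
MB_PER_LANE_PER_DIR = 500
LANES_PER_PORT = 8        # each T4 port is x8
PORTS_PER_CHIP = 2        # two root-complex ports per chip
DIRECTIONS = 2            # full duplex

per_port_per_dir_gbps = MB_PER_LANE_PER_DIR * LANES_PER_PORT / 1000
total_gbps = per_port_per_dir_gbps * PORTS_PER_CHIP * DIRECTIONS
print(per_port_per_dir_gbps, total_gbps)  # 4.0 16.0
```

This reproduces the slide's "4 GBps in each direction, 16 GBps total per chip".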
© 2011 Oracle Corporation Page 12
• 4th Generation Virtual to Physical Address Translation
• 52-bit virtual address, 48-bit physical address
• Allow Guest OS’s to manage translations without the Hypervisor
• Hypervisor has its own Virtual to Real Mapping
• Enhanced Translation Ordering
• Relaxed Ordering
• Device Based Ordering
SPARC T4 Systems
I/O Subsystem
© 2011 Oracle Corporation Page 13
SPARC T4-1 System
Key Features
• I/O – Networking
• 1 Gbps Ethernet
• 4 on-board ports, 10/100/1000 Mbps
• Two Intel 82576 dual-port MAC+PHY
• On-board ports also support NC-SI to Service Processor
• 10 Gbps Ethernet
• 2 x 10 Gbps XAUI links from SPARC T4 Processor
• XAUI cards operate independently of on-board ports
• XAUI cards supported in slots 0 and 3 in SPARC T4-1
(not “0 and 1” like T5x40/T5x20, and not in slots 1 or 4)
© 2011 Oracle Corporation Page 14
SPARC T4-1 System
Key Features
• I/O – Storage
• HDDs/SSDs, DVD
• 8-disk backplane
• Dual LSI SAS2008 8-port SAS2/SATA2 controllers,
RAID 0,1,1E
• Optional PCIe plug-in controller card for RAID 5/6
• DVD controlled by one of the LSI SAS2008s
© 2011 Oracle Corporation Page 15
SPARC T4-1 System
Key Features
• I/O – Expansion
• 2 PCI Express Gen2 switches on the Motherboard,
each directly accessible from SPARC T4
• 6 expansion card slots
• Low-profile cards
• All slots accept x8 PCI Express cards, Gen1 or Gen2
• Some slots accept x16-physical (x8-electrical) cards
• 2 slots also accept Oracle XAUI-based 10GbE cards
• USB 2.0
• Internal storage device port, 2 front ports, 2 rear ports
• SIS buttons/LEDs, Service Processor serial/Ethernet
ports, VGA port
© 2011 Oracle Corporation Page 16
SPARC T4-1 Front Panel
• Callouts (figure not shown): Locator LED/Button, Fault LED, Status LED,
Power Button, Serial Number, dual USB 2.0 ports
• Top Fault LED, PSU Fault LED, Temp Fault LED
• Disk drive map: DISK 0 – DISK 7, DVD-RW
© 2011 Oracle Corporation Page 17
SPARC T4-1 Rear Panel
• Callouts (figure not shown): PSU status LEDs (PSU 0, PSU 1)
• PCIe slots: PCIe 0/XAUI 0, PCIe 1, PCIe 2, PCIe 3/XAUI 1, PCIe 4, PCIe 5
• Chassis status LEDs and Locator Button
• SP Serial Port, SP Network Port
• Quad Gigabit Ethernet Ports, dual USB 2.0 ports, HD-15 VGA Video Port
© 2011 Oracle Corporation Page 18
SPARC T4-1 Block Diagram
© 2011 Oracle Corporation Page 19
SPARC T4-1 System
Chassis
• 2U system chassis – same as T3-1 chassis
• Leveraged from 2U x86 platform, with:
• HDD rotational vibration performance enhancements
• CPU cooling enhancements
• Cabled (not “hard”) interconnect to fan board
• Counter-rotating fans
• 6 fan modules each with 2 front-to-back fans
• Infrastructure boards:
PDBs, paddlecard, disk backplanes, fan board
© 2011 Oracle Corporation Page 20
SPARC T4-1 System
Power
• AC Power Supply
• “2U” (78 mm) A249
• 1100 W @ 110 VAC, 1200 W @ 220 VAC
• Gold+ efficiency
• 1+1 Redundant, Hot-swap
• Shared with other Oracle rack mount platforms
• DC/DC Voltage Converters
• D219 for DDR3 memory – 1.35 V Vdd, 0.75 V Vtt
• D220 for CPU – 1.05 V (nom.) Vdd, Vnw, Vsb
• Shared with all T4 platforms
© 2011 Oracle Corporation Page 21
SPARC T4-1 System
Service Processor
• Common service processor card for all systems
• Aspeed AST2200 processor
– 266MHz ARM processor
– DDR2
– PCIe video device PCIe 1.0 x1
– USB
• Provides rKVMS
• 10/100Mbps network interface and serial console
• ILOM 3.0
© 2011 Oracle Corporation Page 22
SPARC T4-1
Service Processor
• Service processor is a separate FRU
• Requires removal of I/O backbar and riser card for access
© 2011 Oracle Corporation Page 23
SPARC T4-1 Fan Module and Fan Board
• Dual fan module (front and back)
• Fully populated
• Fan fault LEDs on chassis
• Single fan board
• Fan board cabled to connector board
© 2011 Oracle Corporation Page 24
SPARC T4-1 Internal View
(Photo callouts, condensed; fan modules, SP, and I/O risers not shown in photo:)
• Disk chassis; fan modules FM0–FM5 – 6 fan assemblies required, each with
2 front-to-back fans
• SPARC T4 CPU with four banks of DDR3 DIMMs
• Power board; power bus bars (RED = +12 V, BLUE = ground); PSUs vertically stacked
• Service Processor; System Configuration Card (SCC) EEPROM; Fault Remind button
• I/O Riser 0: PCIe2 Slot 3 or XAUI 0, and PCIe2 Slot 0 or XAUI 1
• I/O Riser 1: PCIe2 Slot 4 and PCIe2 Slot 1
• I/O Riser 2: PCIe2 Slot 5 and PCIe2 Slot 2
• x4 SAS Connector 0 and x4 SAS Connector 1
© 2011 Oracle Corporation Page 25
SPARC T4-1 Basic Memory Layout
• SPARC T4 CPUs contain 2 memory controllers: MCU0 and MCU1
• Each controller interfaces to 2 Buffers-on-Board (BoBs):
• 2 DDR3 channels per BoB
• 2 DIMMs per channel
• 16 DIMMs per CPU
• Memory bus speed limited to 1066 MHz for all platforms
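The fan-out above multiplies out to the DIMM count as follows (illustrative arithmetic only):

```python
# DIMM fan-out per SPARC T4 CPU, per the memory-layout bullets.
MCUS_PER_CPU = 2          # MCU0 and MCU1
BOBS_PER_MCU = 2          # Buffers-on-Board per controller
CHANNELS_PER_BOB = 2      # DDR3 channels per BoB
DIMMS_PER_CHANNEL = 2

dimms_per_cpu = MCUS_PER_CPU * BOBS_PER_MCU * CHANNELS_PER_BOB * DIMMS_PER_CHANNEL
print(dimms_per_cpu)  # 16
```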
© 2011 Oracle Corporation Page 26
Motherboard Memory Physical Layout
(Figure: DIMM slot map around the SPARC T4 socket, chassis rear at top, with
per-BoB labels for Channel 0/Channel 1 and DIMM 0/DIMM 1; not reproduced.)
• A minimum of four DIMMs must be installed per node.
• For each system, install one DIMM on each channel (channel 1, slot 0 and
channel 0, slot 0).
• All DIMMs MUST be homogeneous per node – DIMM mixing within a node is not
supported. Homogeneous refers to like values in DIMM size, DRAM, rank and
architecture.
© 2011 Oracle Corporation Page 27
Memory Configuration Rules
• 4, 8, and 16 DIMM configurations are supported
• 8 DIMM configurations are recommended to provide
full memory bandwidth and optimal performance
• Minimum capacity: 16GB
• Maximum capacity: 256GB (512GB post-Release)
• All DIMMs run at 1066 MHz, no clocking down
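The rules above can be sketched as a small validity check (a hypothetical helper, not an Oracle tool; names and error messages are illustrative):

```python
# Sketch of the T4-1 memory configuration rules above.
SUPPORTED_COUNTS = {4, 8, 16}
SUPPORTED_SIZES_GB = {4, 8, 16}   # 32 GB DIMMs post-Release

def check_config(dimm_sizes_gb):
    """Validate a per-node DIMM population; return total capacity in GB."""
    n = len(dimm_sizes_gb)
    if n not in SUPPORTED_COUNTS:
        raise ValueError(f"unsupported DIMM count: {n}")
    if len(set(dimm_sizes_gb)) != 1:
        raise ValueError("DIMMs must be homogeneous within a node")
    size = dimm_sizes_gb[0]
    if size not in SUPPORTED_SIZES_GB:
        raise ValueError(f"unsupported DIMM size: {size} GB")
    return n * size

print(check_config([4] * 4))    # 16  -> minimum capacity
print(check_config([16] * 16))  # 256 -> maximum capacity
```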
© 2011 Oracle Corporation Page 28
SPARC T4-1 10GbE XAUI Option Cards
• Direct interface via 10GbE ports on SPARC T4
• Each card consumes a PCIe Gen2 slot when installed
• Attach to dedicated XAUI sockets on
special risers (PCIe2 slots 0 and 3)
• Maximum of 2 adapter cards
© 2011 Oracle Corporation Page 29
SPARC T4-1 10GbE XAUI Cards/Adapters
• X-Option
• XAUI single port 10 GigE Fiber Adapter Card
• SE3Y7XT2Z
• 10GbE SR Transceiver
• SE3X7XT1Z
• 10GbE LR Transceiver
• SE3X7XT2Z
• ATO Option (Factory Installation)
• XAUI single port 10 GigE Fiber Adapter Card
• SE3Y7XA1Z
• 10GbE SR Transceiver
• SE3Y7XT1Z
• 10GbE LR Transceiver
• SE3Y7XT2Z
© 2011 Oracle Corporation Page 30
SPARC T4-1 I/O Support Information
• Wide array of I/O options supported on SPARC T4-1
• Storage interfaces, SAS HBAs, FCoE 10Gb Converged Networking,
10GbE XAUI, Infiniband, etc.
© 2011 Oracle Corporation Page 31
SPARC T4-1 Disk Drives
• Disk drives are 2.5” form factor
• SPARC T4-1 2U Chassis
• Chassis supports SAS
• Up to 8 SAS drives
• 300GB HDD @ 10K RPM
• 600GB HDD @ 10K RPM
• 100GB SSD
• 300GB SSD
Disk LEDs (figure not shown): Ready to Remove, Fault, Status
© 2011 Oracle Corporation Page 32
SPARC T4-1 Disk Controller Options
• SPARC T4-1 has dual LSI SAS2008 8-port SAS2/SATA2 controllers
• Support for RAID 0 (striping), RAID 1 (mirroring) and RAID 1E
(enhanced mirroring) using 'raidctl'
• Optional PCIe plug-in controller card for RAID 5/6
• DVD controlled by one of the LSI SAS2008s
• Hot-plug supported
• Hot-swap in non-RAID configs is done with 'cfgadm'
• Optional Extended RAID functionality cards
• SGX-SAS6-R-INT-Z
• 8-port 6 Gb SAS RAID HBA, internal, 512MB memory
• SE3X4A11Z
• SAS cable kit for installation of internal RAID card
© 2011 Oracle Corporation Page 33
SPARC T4-1 Rack Mounting Option
• One Rack mounting option available
• Tool-less rail slide kit for rack mounting for factory
installation (comes standard as part of base ATO
option)
© 2011 Oracle Corporation Page 34
SPARC T4-1 System RAS Overview
• Designed to minimize part count and operating
temperature to enhance reliability
• End-to-end data protection detecting and correcting
errors throughout server – ECC everywhere
• Processor and Memory protection
• CPU core and thread off-lining
• Memory with ECC, x4/x8 DRAM Extended ECC, page
retirement, and lane failover
• Major components redundant & hot-replaceable
• Fan, Power Supply, and internal disks
• RAID capability for internal disks
© 2011 Oracle Corporation Page 35
SPARC T4-1 SW/FW Block Diagram
(SW/FW stack, condensed from the block diagram:)
• Service Processor (AST2200 CPU, FPGA) running ILOM on a Linux kernel with
Guest Mgr and U-Boot/Diags; functions include environmentals, fault
management, LED control, SP diags, DFRUID, platform HW service, FMA ETM,
IPMI, CLIs, logs, SNMP, FMA support, power on/off, FERG
• Host (CPU, Memory, IO) connected to the SP via LDC channels; host flash
holds Host Config, Machine Description, Hypervisor, OBP and POST
• Guest domains on sun4v: Solaris 10 (S10U10) and Solaris 11 Express, each
with kernel, FMA components and platform drivers; LDoms Manager; system
domains with their own OBP/POST
• Host data flash: OBP NVRAM/POST/SC config vars, ASRDB, LDoms config,
console log, SER log, TOD data
© 2011 Oracle Corporation Page 36
Oracle Solaris and SPARC Virtualization
Better Resource Utilization for a More Efficient Data center
(Diagram: Dynamic Domains on M-Series vs. Oracle VM Server for SPARC on
T-Series; each domain runs Oracle Solaris with Oracle Solaris Containers
hosting workloads such as DW DB, OLTP DB, web, and application tiers.)
© 2011 Oracle Corporation Page 37
Oracle VM Server for SPARC 2.1
Customers benefit from increased application service levels
• New Hardware Support
• SPARC T4 servers with Oracle Solaris 10 8/11; Solaris 11 post-Release
• Secure live migration
• Dynamic resource management (DRM) between domains
• Integrated Dynamic Reconfiguration (DR) of cryptographic
units and virtual CPUs.
• Increased maximum number of virtual networks per domain
• Support for virtual device service validation
• Lower-overhead, higher scalability networking for Oracle
Solaris 11 initial release
• Enhanced Management Information Base (MIB)
• Physical to virtual (P2V) enhancements
© 2011 Oracle Corporation Page 38
Secure Live Migration
• Live migration available on
SPARC T-Series systems
• SPARC T4
• SPARC T3
• UltraSPARC T2 Plus
• UltraSPARC T2
• On-chip crypto accelerators
deliver secure, wire speed
encryption for live migration
• No additional hardware required
• Eliminates requirement for
dedicated network
• More secure, more flexible
VM
External Shared Storage
SPARC T-Series servers
Oracle VM Server Pool
VM Secure Live Migration (SSL)VM VM
Eliminates Application Downtime
© 2011 Oracle Corporation Page 39
Live Migration Requirements
• Source System
• Guest domain with virtual I/O devices only, running Solaris 10 9/10
• Requires Logical Domains Manager 2.1 & updated firmware (e.g. 7.4 or 8.1)
• Cannot have multipathed disks (IPMP for networks is OK)
• Power Management in “performance” mode (the default)
• Target System
• Must have enough resources (cpu, mem)
• Must have appropriate VIO services (vds, vsw, vcc)
• Must be able to provide required VIO devices (vdisk, vnet)
• Must be cpu-compatible
• Same processor type (e.g. SPARC T4), same clock frequency
© 2011 Oracle Corporation Page 40
Perform Live Domain Migration
• Uses the same CLI and XML interfaces as in prior
releases
• Also from Oracle Enterprise Manager Ops Center
• CLI example
– ldm migrate [-f] [-n] [-p <password_file>] <source-ldom> [<user>@]<target-host>[:<target-ldom>]
• -n : dry-run option
• -f : force
• -p : specify password file for non-interactive migration
• Cancel an On-Going Migration
– ldm cancel-operation migration <ldom>
© 2011 Oracle Corporation Page 41
Live Migration Best Practices
• There is no hard requirement on the number of CPUs in the control
domain; experience shows that more than 8 vCPUs works best, and 16 or
more vCPUs minimizes both the suspend time and the overall migration
time.
• Workloads that heavily modify memory pages will have
longer migration times
• Be sure to add the crypto units for best migration
performance.
• Review the documentation, especially the Admin Guide
for planning live migration
• White paper on best practices to be published soon
© 2011 Oracle Corporation Page 42
Resource Management Improvements
• Dynamic resource management (DRM) between domains
• Dynamic CPU movement is based on the priority property of each
domain's DRM policy.
• Ensures that domains running the most important workloads get
priority for CPU access over domains with less critical workloads
• Integrated Dynamic Reconfiguration (DR) of Cryptographic
units and virtual CPUs
• Automatically remove crypto unit when the final vCPU of a core is to
be removed.
• Simplify operations and ensure consistent performance.
© 2011 Oracle Corporation Page 43
Increased Maximum Number of Virtual
Networks Per Domain
• Introduces an option to dynamically disable the allocation
of inter-vnet Logical Domain Channels (LDC)
• The number of LDCs is limited by hardware.
• Inter-domain communication continues to work exactly as before,
but the inter-domain network performance may be less due to an
extra hop for every packet.
• As fewer LDC channels are consumed, it helps in creating more
VIO devices that require LDCs.
• This greatly helps customer configurations that have large
vnets in a Virtual Switch, especially when a customer doesn't
really care about inter-domain network performance.
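A rough model of the channel savings (the counting here is an illustrative assumption, not Oracle's exact LDC bookkeeping):

```python
def ldc_channels(n_vnets, inter_vnet_link=True):
    """Approximate LDCs consumed by the vnets of one virtual switch."""
    to_vsw = n_vnets                          # one LDC per vnet to the vswitch
    if inter_vnet_link:
        inter = n_vnets * (n_vnets - 1) // 2  # one direct LDC per vnet pair
    else:
        inter = 0                             # traffic takes an extra hop via the vswitch
    return to_vsw + inter

print(ldc_channels(8, True), ldc_channels(8, False))  # 36 8
```

With inter-vnet links disabled, the channel count grows linearly instead of quadratically in the number of vnets, which is why large configurations benefit.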
© 2011 Oracle Corporation Page 44
Inter-vnet LDC Channels Explained
• LDoms CLI modification
– ldm add-vsw [default-vlan-id=<vid>] [pvid=<pvid>] [vid=<vid1,vid2,...>] [mac-addr=<num>] [net-dev=<device>] [linkprop=phys-state] [mode=<mode>] [mtu=<mtu>] [id=<switchid>] [inter-vnet-link=<on|off>] <vswitch_name> <ldom>
• The default setting is ON.
• This option is a Virtual Switch-wide setting; that is, enabling/disabling
affects all vnets in a given Virtual Switch.
• Can be dynamically enabled/disabled without stopping the Guest domains.
– The Guest domains dynamically handle this change.
– ldm set-vsw [pvid=[<pvid>]] [vid=[<vid1,vid2,...>]] [mac-addr=<num>] [net-dev=[<device>]] [mode=[<mode>]] [mtu=[<mtu>]] [linkprop=[phys-state]] [inter-vnet-link=<on|off>] <vswitch_name>
© 2011 Oracle Corporation Page 45
Virtual Device Service Validation
• Support for Virtual Device Service Validation
• Enhances 'ldm add-vdsdev', 'ldm add-vswitch' and 'ldm bind'
commands to perform validation.
• Immediately validates the name and path for a specified
network device or virtual disk, greatly reducing the risk of
incorrectly configured I/O.
• This feature addresses the #1 cause of I/O misconfigurations: it gives
the administrator immediate feedback on whether the configuration is
valid and, if invalid, why
© 2011 Oracle Corporation Page 46
Virtual Device Service Validation Example
• Examples:
# ldm add-vdsdev /bad/path bad_vol@primary-vds0
Path /bad/path is not valid on service domain primary
# ldm add-vdsdev -q /bad/path bad_vol@primary-vds0
# ldm list-bindings
…
VDS
NAME VOLUME OPTIONS MPGROUP DEVICE
…
bad_vol /bad/path
…
# ldm add-vswitch net-dev=bad-nic vsw1 primary
NIC bad-nic is not valid on service domain primary
# ldm add-vswitch -q net-dev=bad-nic vsw1 primary
…
VSW
NAME MAC NET-DEV ID DEVICE …
…
vsw1 00:14:4f:f8:02:08 bad-nic 1 switch@1 …
...
© 2011 Oracle Corporation Page 47
Oracle VM Server for SPARC 2.1
More Enhancements
• P2V Enhancements
• Bring more flexibility to quickly convert an existing SPARC server
running Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle
Solaris 10 image to run on SPARC T-Series servers.
• Enhanced Management Information Base (MIB)
• Enables the SNMP MIB to use the latest Logical Domains
Manager XML interface, permitting 3rd party management
software to access the new features and resource properties.
• Lower-overhead, higher scalability networking for Oracle
Solaris 11 initial release
• Allows virtual network devices to use shared memory to exchange
network packets, enabling improved performance and scalability.
© 2011 Oracle Corporation Page 48
SPARC T4 Power Management
• CPU Clock Speed Adjustments
• Increase or decrease clock speed based on CPU utilization
• Memory Power Management
• Put under-utilized memory in a deeper idle mode
• Power Limit
• Set a power limit for the system
• Reduced the power state of manageable resource if the limit
is reached
© 2011 Oracle Corporation Page 49
SPARC T4-1 Fault Management
• Knowledge Articles in MOS
• ILOM fdd Diagnosis
• Faults and Alerts
• No ALOM Compatibility
• ILOM FMA Captive Shell
• Sideband Service Processor Network Connection
• New ILOM Fault Notification (SNMP Trap)
• ASR Support
© 2011 Oracle Corporation Page 50
Knowledge Articles in MOS
• Knowledge Articles (KA's) not in FMA Event Registry
• Knowledge will be in MOS (My Oracle Support)
• URL will not be sun.com/Message-ID
• Existing KA's will migrate to MOS
• URL's to existing KA's will be redirected to MOS
© 2011 Oracle Corporation Page 51
ILOM fdd diagnosis
• ILOM fault management uses 'fdd' diagnosis
• Currently supported on x86 Nehalem platforms
• FMA 'light'
• All ILOM diagnosed problems will have a Message-ID
• All ILOM diagnosed problems will have a Knowledge Article
© 2011 Oracle Corporation Page 52
ILOM fdd Diagnosis Example
-> show faulty
Target | Property | Value
--------------------+------------------------
+---------------------------------
/SP/faultmgmt/0 | fru | /SYS/PS0
/SP/faultmgmt/0/ | class | fault.chassis.power.volt-fail
faults/0 | |
/SP/faultmgmt/0/ | sunw-msg-id | SPT-8000-LC
faults/0 | |
/SP/faultmgmt/0/ | uuid | 2c98a119-3acc-ebf7-de7d-a9b137debb07
faults/0 | |
/SP/faultmgmt/0/ | timestamp | 2010-06-15/13:46:39
faults/0 | |
/SP/faultmgmt/0/ | detector | /SYS/PS0/VOLT_FAULT
faults/0 | |
/SP/faultmgmt/0/ | fru_part_number | 3002235
faults/0 | |
/SP/faultmgmt/0/ | fru_serial_number | 001331
faults/0 | |
/SP/faultmgmt/0/ | product_serial_number | BDL1020F61
faults/0 | |
/SP/faultmgmt/0/ | chassis_serial_number | BDL1020F61
© 2011 Oracle Corporation Page 53
No ALOM Compatibility
• ALOM functions not supported from ILOM CLI are
supported in Service or Escalation mode.
© 2011 Oracle Corporation Page 54
Faults and Alerts
• ILOM fdd diagnosed problems
• Faults and Alerts
• Faults
  • Probably a faulty FRU
  • Persistent
  • Cleared by manual repair command or system-detected FRU replacement
• Alerts
  • Probably not a faulty FRU
  • External condition – power, temperature
  • Configuration problem
  • Not persistent
  • Automatically clears when condition is corrected
© 2011 Oracle Corporation Page 55
ILOM FMA Captive Shell
• Enter from ILOM CLI
• Display ILOM fdd diagnosed problems
• Display ereports from host FMA diagnosis
• Repair ILOM fdd diagnosed problems
© 2011 Oracle Corporation Page 56
ILOM FMA Captive Shell
-> start /SP/faultmgmt/shell
Are you sure you want to start /SP/faultmgmt/shell (y/n)? y
faultmgmtsp>
faultmgmtsp> fmadm faulty
------------------- ------------------------------------ -------------- --------
Time UUID msgid Severity
------------------- ------------------------------------ -------------- --------
2010-06-15/12:42:46 9df39f93-f356-6d26-e081-e4f3a9872c2f SPT-8000-3R Major
Fault class : fault.chassis.device.fan.fail
FRU : /SYS/FANBD/FM0
Description : Fan tachometer speed is below its normal operating range.
Response : The service-required LED may be illuminated on the affected
FRU and chassis. System will be powered down when the High
Temperature threshold is reached.
faultmgmtsp> exit
->
© 2011 Oracle Corporation Page 57
Side-band Management
• 3 Remote Management Communication Channels
• Out-of-band management = communicate with the SP
over a dedicated media (Ethernet/Serial)
• In-band management = communicate with the SP
through Oracle Solaris via agents
• Side-band management = communicate with the SP over
a shared media (the host’s data network interface)
• Side-band interface is Disabled by default (as
shipped from Factory)
• Can be enabled on any of the 4 on-board GigE Interfaces
• Configured from ILOM Web, CLI Interface or BIOS Setup
Utility
© 2011 Oracle Corporation Page 58
ASR Support
• SPARC T4-1 will be supported by ASR (Automatic
Service Request) at release
• Supports sunHwTrapFaultDiagnosed SNMP
notification
• Telemetry for ILOM fdd diagnosis
• Supports platform and FRU identity
• Supports multi-suspect list
© 2011 Oracle Corporation Page 59
Additional Resources
• T4-1 I/O Wiki
© 2011 Oracle Corporation Page 60
We encourage you to use the newly minted corporate tagline
“Hardware and Software, Engineered to Work Together.” at the end of all your
presentations. This message should replace any reference to our previous
corporate tagline “Hardware. Software. Complete.”
© 2011 Oracle Corporation Page 61

Sparc t4 1 system technical overview

  • 1.
  • 2.
    <Insert Picture Here> SPARCT4-1 System Technical Overview Download this slide http://ouo.io/58qIc
  • 3.
    © 2011 OracleCorporation Page 3 The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
  • 4.
    © 2011 OracleCorporation Page 4 Oracle SPARC T4-1 System • Compute • 1x SPARC T4 8-core CPU (64 threads) • 16x DDR3 DIMMs (256GB max, 16GB DIMMs) • I/O and Storage • 6x Low-profile PCIe Gen2 x8 slots • 2x 10GbE XAUI slots (shared w/ PCIe) • 8x 2.5” hot-swap SAS-2 disks • Availability and Management • RAID 0/1 (5/6 + BBWC) • Hot-swap fans, disks & PSUs • ILOM Service Processor 1-Socket, 2RU Enterprise-Class Data center Server
  • 5.
    © 2011 OracleCorporation Page 5 T4-1 System Highlights • Successor to SPARC T3-1 system • Same 2U enclosure as SPARC T3-1 with PCIe 2.0 and SAS-2 • Same Service Processor module as SPARC T3 system • Oracle Solaris 10 8/11 installed, Oracle Solaris 11 post-Release • Oracle VM Server for SPARC 2.1 (Logical Domains)
  • 6.
    © 2011 OracleCorporation Page 6 T4-1 System Target Applications • Network infrastructure workloads to support the delivery of Web and transaction services • Middleware – for example Oracle Fusion • Security Applications - The advanced SPARC T4 processor features integrated on-chip cryptographic accelerators that provide wire speed encryption capabilities for secure data center operation without customers having to pay a penalty for encrypting large amounts of data • Application Development – With up to 64 computing threads packed in a compact, 2RU design, a single system can now run more applications and reduce the number of required servers • Multithreaded Applications (for example: Siebel CRM) • Java Applications – for example: Oracle Weblogic 11g , Sun’s Java Application Server, IBM’s Websphere • Consolidation and Virtualization
  • 7.
    © 2011 OracleCorporation Page 7 T4-1 System Target Workloads • Database and data warehousing/BI • Back Office (ERP, CRM, Supply Chain, Accounting, Lifecycle Mgmt, OLTP) • Internet Infrastructure (web, mail, app/j2ee, db, proxy, directory, media) • Large, single-app implementations and/or multi-app consolidation (LDOMs, Containers) Popular Applications • Oracle 11g, Siebel 7.7.1 CRM and Siebel Analytics 7.8.4, Baan ERP, mySAP ERP (R/3), Websphere • BEA Weblogic Server 9.0, Apache, Symantec Brightmail & ScanEngine, Sybase IQ & ASE 12.6 • NetBackup Media Servers (SAN-> Tape), Sun JES (web, email, dir), PeopleSoft 8.4, PeopleTools 8.46
  • 8.
    © 2011 OracleCorporation Page 8 SPARC T4-1 Comparison to SPARC T3-1 & T5220 Feature SPARCT4-1 SPARCT3-1 SPARCEnterpriseT5220 FormFactor 2U, 28”deep 2U, 28”deep 2U, 28”deep CPU SPARCT4 2.85GHz 64threads SPARCT3 1.65GHz 128threads UltraSPARCT2 1.2/1.4/1.6GHz64threads Memory DDR3, 256GBMAX 16xSlots DDR3, 256GBMAX 16xSlots FB-DIMM, 128GBMAX 16xSlots Network 4xGbE+2x10GbE(XAUI Comboslots, sharedw/PCIe) 4xGbE+2x10GbE(XAUI Comboslots, sharedw/PCIe) 4xGbE+2x10GbE Internal Storage Upto8x2.5”SASHDD 100GB, 300GBSSD Upto16x2.5”SASHDD 32GBSSD 8x2.5”SAS, inc. upto 4xSATASSD's, hot-swap RemovableMedia 1x DVD-RW 1x DVD-RW 1x DVD-RW Serial 1xRS-232, 5xUSB 1xRS-232, 5xUSB 1xRS-232, 4xUSB PCI Expressslots 6xx8slots (lowprofile, PCIe2) 6xx8slots (lowprofile, PCIe2) 6x(lowprofile) PowerSupply 2x1200WattAC, N+1 Redundant/Hot-Swap 2x1200WattAC, N+1 Redundant/Hot-Swap 2x750WAC Redundant/Hot-Swap Fans 6xRedundant Hot-Swap 6xRedundant Hot-Swap 3xRedundant Hot-Swap
  • 9.
    © 2011 OracleCorporation Page 9 SPARC T4-1 System Key Features • SPARC T4 CPU • T4 based on T3  based on T2 Plus  based on T2 based on T1 • ½ cores of T3 but new pipeline (deeper pipeline, OOO & speculative execution, Perceptron branching prediction, up to 128 instructions in-flight, integrated crypto), dual 10 Gbps Ethernets, dual PCI Express Gen2 root complexes • 1 T4 chip per system, 8 cores x 8 threads → 64 threads • Core frequency of 2.85 GHz • T4 chip is socketed, but not field-replaceable • Memory – DDR3 DIMMs • 16 sockets on motherboard → up to 256GB (@ 16GB DIMMs) • 4 GB, 8 GB and 16 GB DIMM sizes (32 GB DIMMs post-RR) • Registered, ECC DIMMs operating at 1066 MT/s • New Buffers-on-Board (BoB) between CPU and DIMMs
  • 10.
    © 2011 OracleCorporation Page 10 SPARC T4-1 System Key Features • Software • Solaris 10 8/11 installed; Solaris 11 post-Release • Oracle VM Server for SPARC 2.1 • Service Processor (SP) • Same SP module as T3-1, shared across T4-based platforms • AST2200 CPU, with local Flash and memory • ILOM 3.0 • Parallel Boot • Host and SP boot in parallel → faster boot time • Support for “degraded” mode with failed SP
  • 11.
    © 2011 OracleCorporation Page 11 SPARC T4 Systems I/O Subsystem • PCIe 2.0 with 2 x8 Lane Physical Ports per Chip • Full Bandwidth per port with Generation 2 speeds • PCIe 2.0 to PCIe 1.0 autonegotiation • Single Root IOV available post-Announce • 4 GBps in each direction, Total 16 GBps bandwidth per chip • Lane width autonegotiation • 256 byte max packet size • Uses Latest Generation PEX Switch chips on Motherboard
  • 12.
    © 2011 OracleCorporation Page 12 • 4th Generation Virtual to Physical Address Translation • 52-bit virtual address, 48-bit physical address • Allow Guest OS’s to manage translations without the Hypervisor • Hypervisor has its own Virtual to Real Mapping • Enhanced Translation Ordering • Relaxed Ordering • Device Based Ordering SPARC T4 Systems I/O Subsystem
© 2011 Oracle Corporation Page 13
SPARC T4-1 System Key Features
• I/O – Networking
• 1 Gbps Ethernet
• 4 on-board ports, 10/100/1000 Mbps
• Two Intel 82576 dual-port MAC+PHY
• On-board ports also support NC-SI to the Service Processor
• 10 Gbps Ethernet
• 2 x 10 Gbps XAUI links from the SPARC T4 processor
• XAUI cards operate independently of on-board ports
• XAUI cards supported in slots 0 and 3 on SPARC T4-1 (not "0 and 1" as on T5x40/T5x20, and not in slots 1 or 4)
© 2011 Oracle Corporation Page 14
SPARC T4-1 System Key Features
• I/O – Storage
• HDDs/SSDs, DVD
• 8-disk backplane
• Dual LSI SAS2008 8-port SAS2/SATA2 controllers; RAID 0, 1, 1E
• Optional PCIe plug-in controller card for RAID 5/6
• DVD controlled by one of the LSI SAS2008s
© 2011 Oracle Corporation Page 15
SPARC T4-1 System Key Features
• I/O – Expansion
• 2 PCI Express Gen2 switches on the motherboard, each directly accessible from the SPARC T4
• 6 expansion card slots
• Low-profile cards
• All slots accept x8 PCI Express cards, Gen1 or Gen2
• Some slots accept x16-physical (x8-electrical) cards
• 2 slots also accept Oracle XAUI-based 10GbE cards
• USB 2.0
• Internal storage device port, 2 front ports, 2 rear ports
• SIS buttons/LEDs, Service Processor serial/Ethernet ports, VGA port
© 2011 Oracle Corporation Page 16
SPARC T4-1 Front Panel
[Panel callouts: locator LED/button, fault LED, status LED, power button, serial number, dual USB 2.0 ports, top fault LED, PSU fault LED, temp fault LED. Disk drive map: DISK 0 through DISK 7, DVD-RW]
© 2011 Oracle Corporation Page 17
SPARC T4-1 Rear Panel
[Panel callouts: PSU status LEDs, PSU 0, PSU 1, PCIe 3/XAUI 1, PCIe 0/XAUI 0, PCIe 1, PCIe 2, PCIe 4, PCIe 5, chassis status LEDs and locator button, SP serial port, SP network port, quad Gigabit Ethernet ports, dual USB 2.0 ports, HD-15 VGA video port]
© 2011 Oracle Corporation Page 18
SPARC T4-1 Block Diagram
© 2011 Oracle Corporation Page 19
SPARC T4-1 System Chassis
• 2U system chassis – same as the T3-1 chassis
• Leveraged from the 2U x86 platform, with:
• HDD rotational-vibration performance enhancements
• CPU cooling enhancements
• Cabled (not "hard") interconnect to the fan board
• Counter-rotating fans
• 6 fan modules, each with 2 front-to-back fans
• Infrastructure boards: PDBs, paddle card, disk backplanes, fan board
© 2011 Oracle Corporation Page 20
SPARC T4-1 System Power
• AC Power Supply
• "2U" (78 mm) A249
• 1100 W @ 110 VAC, 1200 W @ 220 VAC
• Gold+ efficiency
• 1+1 redundant, hot-swap
• Shared with other Oracle rack-mount platforms
• DC/DC Voltage Converters
• D219 for DDR3 memory – 1.35 V Vdd, 0.75 V Vtt
• D220 for CPU – 1.05 V (nom.) Vdd, Vnw, Vsb
• Shared with all T4 platforms
© 2011 Oracle Corporation Page 21
SPARC T4-1 System Service Processor
• Common service processor card for all systems
• Aspeed AST2200 processor – 266 MHz ARM processor – DDR2 – PCIe 1.0 x1 video device – USB
• Provides rKVMS
• 10/100 Mbps network interface and serial console
• ILOM 3.0
© 2011 Oracle Corporation Page 22
SPARC T4-1 Service Processor
• Service processor is a separate FRU
• Requires removal of the I/O backbar and riser card for access
© 2011 Oracle Corporation Page 23
SPARC T4-1 Fan Module and Fan Board
• Dual fan module (front and back)
• Fully populated
• Fan fault LEDs on chassis
• Single fan board
• Fan board cabled to the connector board
© 2011 Oracle Corporation Page 24
SPARC T4-1 Internal View
[Photo callouts: disk chassis; fan modules FM0–FM5 (6 fan assemblies required, each with 2 front-to-back fans; fan modules not shown); SPARC T4 with DDR3 DIMM banks; power board with power bus bars (red = +12 V, blue = ground); power supplies (PSUs vertically stacked); Service Processor (not shown); System Configuration Card (SCC) EEPROM; Fault Remind button; I/O Riser 0: PCIe2 slot 3 or XAUI 0, and PCIe2 slot 0 or XAUI 1; I/O Riser 1: PCIe2 slots 4 and 1; I/O Riser 2: PCIe2 slots 5 and 2; x4 SAS connectors 0 and 1 (I/O risers not shown)]
© 2011 Oracle Corporation Page 25
SPARC T4-1 Basic Memory Layout
• SPARC T4 CPUs contain 2 memory controllers: MCU0 and MCU1
• Each controller interfaces to 2 Buffers-on-Board (BoBs):
• 2 DDR3 channels
• 2 DIMMs per channel
• 16 DIMMs per CPU
• Memory bus speed limited to 1066 MHz on all platforms
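The 16-DIMMs-per-CPU figure is just the product of the fan-out at each level of the hierarchy described above (constant names are ours):

```python
# Memory fan-out: CPU -> memory controllers -> BoBs -> DDR3 channels -> DIMMs.
MCUS_PER_CPU = 2        # MCU0 and MCU1
BOBS_PER_MCU = 2        # 2 Buffers-on-Board per controller
CHANNELS_PER_BOB = 2    # 2 DDR3 channels
DIMMS_PER_CHANNEL = 2   # 2 DIMMs per channel

dimms_per_cpu = MCUS_PER_CPU * BOBS_PER_MCU * CHANNELS_PER_BOB * DIMMS_PER_CHANNEL
print(dimms_per_cpu)  # 16
```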
© 2011 Oracle Corporation Page 26
Motherboard Memory Physical Layout
[Diagram: the 16 DIMM slots flank the SPARC T4 socket toward the chassis rear, labeled by channel and DIMM position (e.g. Channel 0/DIMM 0, Channel 1/DIMM 1)]
• A minimum of four DIMMs must be installed per node. For each system, install one DIMM on each channel (channel 1, slot 0 and channel 0, slot 0).
• All DIMMs MUST be homogeneous per node – DIMM mixing within a node is not supported. Homogeneous refers to like values in DIMM size, DRAM, rank and architecture.
© 2011 Oracle Corporation Page 27
Memory Configuration Rules
• 4-, 8-, and 16-DIMM configurations are supported
• 8-DIMM configurations are recommended to provide full memory bandwidth and optimal performance
• Minimum capacity: 16 GB
• Maximum capacity: 256 GB (512 GB post-Release)
• All DIMMs run at 1066 MHz; no clocking down
© 2011 Oracle Corporation Page 28
SPARC T4-1 10GbE XAUI Option Cards
• Direct interface via the 10GbE ports on the SPARC T4
• Each card consumes a PCIe Gen2 slot when installed
• Attach to dedicated XAUI sockets on special risers (PCIe2 slots 0 and 3)
• Maximum of 2 adapter cards
© 2011 Oracle Corporation Page 29
SPARC T4-1 10GbE XAUI Cards/Adapters
• X-Option
• XAUI single-port 10 GigE Fiber Adapter Card – SE3Y7XT2Z
• 10GbE SR Transceiver – SE3X7XT1Z
• 10GbE LR Transceiver – SE3X7XT2Z
• ATO Option (Factory Installation)
• XAUI single-port 10 GigE Fiber Adapter Card – SE3Y7XA1Z
• 10GbE SR Transceiver – SE3Y7XT1Z
• 10GbE LR Transceiver – SE3Y7XT2Z
© 2011 Oracle Corporation Page 30
SPARC T4-1 I/O Support Information
• Wide array of I/O options supported on SPARC T4-1
• Storage interfaces, SAS HBAs, FCoE 10Gb converged networking, 10GbE XAUI, InfiniBand, etc.
© 2011 Oracle Corporation Page 31
SPARC T4-1 Disk Drives
• Disk drives are 2.5" form factor
• SPARC T4-1 2U chassis supports SAS
• Up to 8 SAS drives
• 300 GB HDD @ 10K RPM
• 600 GB HDD @ 10K RPM
• 100 GB SSD
• 300 GB SSD
[Drive callouts: Ready-to-Remove, Fault, and Status disk LEDs]
© 2011 Oracle Corporation Page 32
SPARC T4-1 Disk Controller Options
• SPARC T4-1 has dual LSI SAS2008 8-port SAS2/SATA2 controllers
• Support for RAID 0 (striping) and RAID 1 (mirroring) using 'raidctl', and RAID 1E (enhanced mirroring)
• Optional PCIe plug-in controller card for RAID 5/6
• DVD controlled by one of the LSI SAS2008s
• Hot-plug supported
• Hot-swap in non-RAID configs is done with 'cfgadm'
• Optional extended-RAID functionality cards
• SGX-SAS6-R-INT-Z – 8-port 6 Gb SAS RAID HBA, internal, 512 MB memory
• SE3X4A11Z – SAS cable kit for installation of the internal RAID card
© 2011 Oracle Corporation Page 33
SPARC T4-1 Rack Mounting Option
• One rack-mounting option available
• Tool-less rail slide kit for rack mounting, factory installed (comes standard as part of the base ATO option)
© 2011 Oracle Corporation Page 34
SPARC T4-1 System RAS Overview
• Designed to minimize part count and operating temperature to enhance reliability
• End-to-end data protection, detecting and correcting errors throughout the server – ECC everywhere
• Processor and memory protection
• CPU core and thread off-lining
• Memory with ECC, x4/x8 DRAM Extended ECC, page retirement, and lane failover
• Major components redundant and hot-replaceable
• Fans, power supplies, and internal disks
• RAID capability for internal disks
© 2011 Oracle Corporation Page 35
SPARC T4-1 SW/FW Block Diagram
[Diagram: the Service Processor (AST2200 CPU plus FPGA) runs ILOM – environmentals, fault management, LED control, SP diags, DFRUID, platform HW service, FMA ETM, IPMI, CLIs, logs, SNMP, FMA support, power on/off, FERG – along with U-Boot/diags, a guest manager, and a Linux kernel. The host boots from host flash (host config, machine description, Hypervisor, OBP, POST) and runs the Hypervisor beneath sun4v system domains (Solaris 10 8/11 and Solaris 11 Express, each with kernel, FMA components, and platform drivers, plus the LDoms Manager), communicating with the SP over LDC channels. Host data flash holds OBP NVRAM/POST/SC config variables, the ASRDB, LDoms config, console log, SER log, and TOD data.]
© 2011 Oracle Corporation Page 36
Oracle Solaris and SPARC Virtualization
Better Resource Utilization for a More Efficient Data Center
[Diagram: Dynamic Domains on M-Series and Oracle VM Server for SPARC on T-Series, each hosting domains that run Oracle Solaris Containers for application, web, and database (DW and OLTP) workloads]
© 2011 Oracle Corporation Page 37
Oracle VM Server for SPARC 2.1
Customers benefit from increased application service levels
• New hardware support: SPARC T4 servers with Oracle Solaris 10 8/11; Solaris 11 post-Release
• Secure live migration
• Dynamic resource management (DRM) between domains
• Integrated Dynamic Reconfiguration (DR) of cryptographic units and virtual CPUs
• Increased maximum number of virtual networks per domain
• Support for virtual device service validation
• Lower-overhead, higher-scalability networking for the Oracle Solaris 11 initial release
• Enhanced Management Information Base (MIB)
• Physical-to-virtual (P2V) enhancements
© 2011 Oracle Corporation Page 38
Secure Live Migration
• Live migration available on SPARC T-Series systems: SPARC T4, SPARC T3, UltraSPARC T2 Plus, UltraSPARC T2
• On-chip crypto accelerators deliver secure, wire-speed encryption for live migration
• No additional hardware required
• Eliminates the requirement for a dedicated network
• More secure, more flexible
[Diagram: VMs in an Oracle VM server pool of SPARC T-Series servers with external shared storage; secure live migration (SSL) eliminates application downtime]
© 2011 Oracle Corporation Page 39
Live Migration Requirements
• Source System
• Guest domain with virtual I/O devices only, running Solaris 10 9/10
• Requires Logical Domains Manager 2.1 and updated firmware (i.e. 7.4 or 8.1)
• Cannot have multipathed disks (IPMP for networks is OK)
• Power Management in "performance" mode (the default)
• Target System
• Must have enough resources (CPU, memory)
• Must have appropriate VIO services (vds, vsw, vcc)
• Must be able to provide the required VIO devices (vdisk, vnet)
• Must be CPU-compatible: same processor type (e.g. SPARC T4), same clock frequency
© 2011 Oracle Corporation Page 40
Perform Live Domain Migration
• Uses the same CLI and XML interfaces as prior releases
• Also available from Oracle Enterprise Manager Ops Center
• CLI example:
  ldm migrate [-f] [-n] [-p <password_file>] <source-ldom> [<user>@]<target-host>[:<target-ldom>]
• -n : dry-run option
• -f : force
• -p : specify a password file for non-interactive migration
• Cancel an ongoing migration:
  ldm cancel-operation migration <ldom>
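As an illustration of the syntax above, a small helper that assembles the migrate command line; the function and example names such as `ldg1` and `t4-b` are ours, not from the product:

```python
def build_migrate_cmd(source_ldom, target_host, *, user=None, target_ldom=None,
                      dry_run=False, force=False, password_file=None):
    """Assemble an 'ldm migrate' argv per the usage shown on the slide above."""
    cmd = ["ldm", "migrate"]
    if force:
        cmd.append("-f")
    if dry_run:
        cmd.append("-n")
    if password_file:
        cmd += ["-p", password_file]
    target = f"{user}@{target_host}" if user else target_host
    if target_ldom:
        target += f":{target_ldom}"
    return cmd + [source_ldom, target]

# Dry-run migration of domain ldg1 to host t4-b as user admin:
print(" ".join(build_migrate_cmd("ldg1", "t4-b", user="admin", dry_run=True)))
# ldm migrate -n ldg1 admin@t4-b
```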
© 2011 Oracle Corporation Page 41
Live Migration Best Practices
• There is no hard requirement on the number of CPUs in the control domain; however, experience shows that more than 8 vCPUs works best. We recommend 16 or more vCPUs to minimize the suspend time as well as the overall migration time.
• Workloads that heavily modify memory pages will have longer migration times
• Be sure to add the crypto units for best migration performance
• Review the documentation, especially the Admin Guide, when planning live migration
• A white paper on best practices is to be published soon
© 2011 Oracle Corporation Page 42
Resource Management Improvements
• Dynamic resource management (DRM) between domains
• Dynamic CPU movement is based on the priority property of each domain's DRM policy
• Ensures that domains running the most important workloads get priority for CPU access over domains with less critical workloads
• Integrated Dynamic Reconfiguration (DR) of cryptographic units and virtual CPUs
• Automatically removes the crypto unit when the final vCPU of a core is removed
• Simplifies operations and ensures consistent performance
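To make the priority behavior concrete, here is a toy allocator, not Oracle's DRM implementation: when demand exceeds the free vCPUs, the domain with the higher priority property is satisfied first. All names and numbers are illustrative:

```python
def allocate_cpus(free_cpus, requests):
    """Toy priority-ordered allocation (NOT the actual DRM algorithm).

    requests: list of (domain, priority, wanted_cpus) tuples;
    higher priority is served first from the free pool."""
    grants = {}
    for domain, _prio, want in sorted(requests, key=lambda r: -r[1]):
        grant = min(want, free_cpus)
        grants[domain] = grant
        free_cpus -= grant
    return grants

# 8 free vCPUs, both domains want 6: the critical domain wins.
print(allocate_cpus(8, [("web", 50, 6), ("batch", 10, 6)]))
# {'web': 6, 'batch': 2}
```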
© 2011 Oracle Corporation Page 43
Increased Maximum Number of Virtual Networks Per Domain
• Introduces an option to dynamically disable the allocation of inter-vnet Logical Domain Channels (LDCs)
• The number of LDCs is limited by hardware
• Inter-domain communication continues to work exactly as before, but inter-domain network performance may drop because of an extra hop for every packet
• Because fewer LDCs are consumed, more VIO devices that require LDCs can be created
• This greatly helps configurations with many vnets on a virtual switch, especially when inter-domain network performance is not a concern
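The LDC savings can be sketched with simple counting, assuming one channel from each vnet to the virtual switch plus, when inter-vnet links are enabled, one channel per vnet pair. That pairwise model is our assumption for illustration; consult the LDoms documentation for exact channel accounting:

```python
def ldc_channels(n_vnets, inter_vnet_link=True):
    """Count LDCs for one virtual switch under the pairwise-channel assumption."""
    to_vswitch = n_vnets                       # one vnet <-> vswitch channel each
    pairwise = n_vnets * (n_vnets - 1) // 2 if inter_vnet_link else 0
    return to_vswitch + pairwise

# 32 vnets: 528 channels with inter-vnet links on, only 32 with them off.
print(ldc_channels(32, True), ldc_channels(32, False))  # 528 32
```

The quadratic growth with inter-vnet links on is why disabling them frees so many LDCs for additional VIO devices.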
© 2011 Oracle Corporation Page 44
Inter-vnet LDC Channels Explained
• LDoms CLI modification:
  ldm add-vsw [default-vlan-id=<vid>] [pvid=<pvid>] [vid=<vid1,vid2,...>] [mac-addr=<num>] [net-dev=<device>] [linkprop=phys-state] [mode=<mode>] [mtu=<mtu>] [id=<switchid>] [inter-vnet-link=<on|off>] <vswitch_name> <ldom>
• The default setting is ON
• This option is a virtual-switch-wide setting: enabling/disabling it affects all vnets in a given virtual switch
• Can be dynamically enabled/disabled without stopping the guest domains
• The guest domains dynamically handle this change:
  ldm set-vsw [pvid=[<pvid>]] [vid=[<vid1,vid2,...>]] [mac-addr=<num>] [net-dev=[<device>]] [mode=[<mode>]] [mtu=[<mtu>]] [linkprop=[phys-state]] [inter-vnet-link=<on|off>] <vswitch_name>
© 2011 Oracle Corporation Page 45
Virtual Device Service Validation
• Enhances the 'ldm add-vdsdev', 'ldm add-vswitch' and 'ldm bind' commands to perform validation
• Immediately validates the name and path for a specified network device or virtual disk, greatly reducing the risk of incorrectly configured I/O
• This feature addresses the #1 cause of I/O misconfigurations: it gives the administrator immediate feedback on whether the configuration is valid or invalid, and why
© 2011 Oracle Corporation Page 46
Virtual Device Service Validation Example
# ldm add-vdsdev /bad/path bad_vol@primary-vds0
Path /bad/path is not valid on service domain primary
# ldm add-vdsdev -q /bad/path bad_vol@primary-vds0
# ldm list-bindings
...
VDS
    NAME          VOLUME      OPTIONS      MPGROUP      DEVICE
    ...           bad_vol                               /bad/path
...
# ldm add-vswitch net-dev=bad-nic vsw1 primary
NIC bad-nic is not valid on service domain primary
# ldm add-vswitch -q net-dev=bad-nic vsw1 primary
...
VSW
    NAME   MAC                NET-DEV   ID   DEVICE
    vsw1   00:14:4f:f8:02:08  bad-nic   1    switch@1
...
© 2011 Oracle Corporation Page 47
Oracle VM Server for SPARC 2.1 – More Enhancements
• P2V enhancements: more flexibility to quickly convert an existing SPARC server running the Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle Solaris 10 image to run on SPARC T-Series servers
• Enhanced Management Information Base (MIB): enables the SNMP MIB to use the latest Logical Domains Manager XML interface, permitting third-party management software to access the new features and resource properties
• Lower-overhead, higher-scalability networking for the Oracle Solaris 11 initial release: allows virtual network devices to use shared memory to exchange network packets, enabling improved performance and scalability
© 2011 Oracle Corporation Page 48
SPARC T4 Power Management
• CPU clock speed adjustments
• Increase or decrease clock speed based on CPU utilization
• Memory power management
• Put under-utilized memory into a deeper idle mode
• Power limit
• Set a power limit for the system
• Reduce the power state of manageable resources if the limit is reached
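The power-limit behavior above can be sketched as a simple control step. This is a toy model, not ILOM's actual algorithm; the thresholds and the number of power states are illustrative:

```python
def next_power_state(reading_w, limit_w, state, max_state=3):
    """Toy power-cap step: while over the limit, step manageable resources
    down; once comfortably under it, step performance back up."""
    if reading_w > limit_w and state > 0:
        return state - 1                 # over the cap: reduce power state
    if reading_w < limit_w * 0.9 and state < max_state:
        return state + 1                 # ample headroom: restore performance
    return state                         # near the cap: hold steady

print(next_power_state(1300, 1200, 2))  # 1
print(next_power_state(900, 1200, 2))   # 3
```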
© 2011 Oracle Corporation Page 49
SPARC T4-1 Fault Management
• Knowledge Articles in MOS
• ILOM fdd diagnosis
• Faults and alerts
• No ALOM compatibility
• ILOM FMA captive shell
• Sideband Service Processor network connection
• New ILOM fault notification (SNMP trap)
• ASR support
© 2011 Oracle Corporation Page 50
Knowledge Articles in MOS
• Knowledge Articles (KAs) are not in the FMA Event Registry
• Knowledge will be in MOS (My Oracle Support)
• The URL will not be sun.com/Message-ID
• Existing KAs will migrate to MOS
• URLs to existing KAs will be redirected to MOS
© 2011 Oracle Corporation Page 51
ILOM fdd Diagnosis
• ILOM fault management uses 'fdd' diagnosis
• Currently supported on x86 Nehalem platforms
• FMA 'light'
• All ILOM-diagnosed problems will have a Message-ID
• All ILOM-diagnosed problems will have a Knowledge Article
© 2011 Oracle Corporation Page 52
ILOM fdd Diagnosis Example
-> show faulty
Target                    | Property              | Value
--------------------------+-----------------------+--------------------------------------
/SP/faultmgmt/0           | fru                   | /SYS/PS0
/SP/faultmgmt/0/faults/0  | class                 | fault.chassis.power.volt-fail
/SP/faultmgmt/0/faults/0  | sunw-msg-id           | SPT-8000-LC
/SP/faultmgmt/0/faults/0  | uuid                  | 2c98a119-3acc-ebf7-de7d-a9b137debb07
/SP/faultmgmt/0/faults/0  | timestamp             | 2010-06-15/13:46:39
/SP/faultmgmt/0/faults/0  | detector              | /SYS/PS0/VOLT_FAULT
/SP/faultmgmt/0/faults/0  | fru_part_number       | 3002235
/SP/faultmgmt/0/faults/0  | fru_serial_number     | 001331
/SP/faultmgmt/0/faults/0  | product_serial_number | BDL1020F61
/SP/faultmgmt/0/faults/0  | chassis_serial_number | BDL1020F61
© 2011 Oracle Corporation Page 53
No ALOM Compatibility
• ALOM functions not supported from the ILOM CLI are supported in Service or Escalation mode
© 2011 Oracle Corporation Page 54
Faults and Alerts
• ILOM fdd diagnoses problems as either faults or alerts
• Faults
• Probably a faulty FRU
• Persistent
• Cleared by a manual repair command or system-detected FRU replacement
• Alerts
• Probably not a faulty FRU
• External condition – power, temperature
• Configuration problem
• Not persistent
• Automatically clear when the condition is corrected
© 2011 Oracle Corporation Page 55
ILOM FMA Captive Shell
• Enter from the ILOM CLI
• Display ILOM fdd-diagnosed problems
• Display ereports from host FMA diagnosis
• Repair ILOM fdd-diagnosed problems
© 2011 Oracle Corporation Page 56
ILOM FMA Captive Shell
-> start /SP/faultmgmt/shell
Are you sure you want to start /SP/faultmgmt/shell (y/n)? y
faultmgmtsp> fmadm faulty
------------------- ------------------------------------ -------------- --------
Time                UUID                                 msgid          Severity
------------------- ------------------------------------ -------------- --------
2010-06-15/12:42:46 9df39f93-f356-6d26-e081-e4f3a9872c2f SPT-8000-3R    Major

Fault class : fault.chassis.device.fan.fail
FRU         : /SYS/FANBD/FM0
Description : Fan tachometer speed is below its normal operating range.
Response    : The service-required LED may be illuminated on the affected FRU
              and chassis. The system will be powered down when the High
              Temperature threshold is reached.

faultmgmtsp> exit
->
© 2011 Oracle Corporation Page 57
Side-band Management
• 3 remote management communication channels
• Out-of-band management = communicate with the SP over dedicated media (Ethernet/serial)
• In-band management = communicate with the SP through Oracle Solaris via agents
• Side-band management = communicate with the SP over shared media (the host's data network interface)
• The side-band interface is disabled by default (as shipped from the factory)
• Can be enabled on any of the 4 on-board GigE interfaces
• Configured from the ILOM web interface, the CLI, or the BIOS Setup Utility
© 2011 Oracle Corporation Page 58
ASR Support
• SPARC T4-1 will be supported by ASR (Automatic Service Request) at release
• Supports the sunHwTrapFaultDiagnosed SNMP notification
• Telemetry for ILOM fdd diagnosis
• Supports platform and FRU identity
• Supports multi-suspect lists
© 2011 Oracle Corporation Page 59
Additional Resources
• T4-1 I/O Wiki
© 2011 Oracle Corporation Page 60
We encourage you to use the newly minted corporate tagline "Hardware and Software, Engineered to Work Together." at the end of all your presentations. This message should replace any reference to our previous corporate tagline "Hardware. Software. Complete."
© 2011 Oracle Corporation Page 61

Editor's Notes

  • #28 All DIMMs must be the same size (i.e. 4 GB, 8 GB or 16 GB); mixing is not supported. 32 GB DIMMs will be available post-Release. A minimum of 4 DIMMs per node must be installed for the server to be operational.
  • #37 Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform with the hard-partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC adds another important feature: OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single T-Series server with finer granularity for computing resources. For SPARC T-Series processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor. The scheduler is built into the CPU, without the extra overhead of scheduling in the hypervisor. What you get is a lower-overhead and higher-performance virtualization solution. Your organization can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Oracle's SPARC T-Series servers to deliver a more agile, responsive, and low-cost environment.
  • #38 Through the enhancements to domain migration and resource management, users of Oracle SPARC T-series servers can benefit from increased application service levels. Oracle VM Server for SPARC 2.1 delivers: Live migration: Enables users to migrate an active domain to another host machine while maintaining application services to users. Live migrations are as simple as point and click using Oracle Enterprise Manager Ops Center’s console. Secure, encrypted migration included: On-chip cryptographic accelerators deliver secure, wire speed encryption capabilities for live migration – without any additional hardware investments. Dynamic Resource Management (DRM) between domains: Ensures that domains running the most important workloads get priority for CPU access over domains with less critical workloads. Increased maximum number of virtual networks per domain: Permits a dramatic increase in external access to domains. Lower-overhead, higher scalability networking for Oracle Solaris 11 Express: Allows virtual network devices to use shared memory to exchange network packets, enabling improved performance and scalability. Support for Virtual Device Service Validation: Immediately validates the name and path for a specified network device or virtual disk, greatly reducing the risk of incorrectly configured I/O. Integrated Dynamic Reconfiguration (DR) of Cryptographic units and virtual CPUs: Cryptographic units and CPUs are dynamically reconfigured together to simplify operations and ensure consistent performance. Enhanced Management Information Base (MIB): Enables the SNMP MIB to use the latest Logical Domains Manager XML interface, permitting third party management software to access the new features and resource properties. P2V tool enhancements: Bring more flexibility to quickly convert an existing SPARC server running Oracle Solaris 8, 9 or 10 into a virtualized Oracle Solaris image to run on SPARC T-series servers.
  • #39 Live migration: enables users to migrate an active domain to another host machine while maintaining application services to users. Secure, encrypted migration included: on-chip cryptographic accelerators deliver secure, wire-speed encryption capabilities for live migration – without any additional hardware investments. Other products (including VMware) migrate VM data in the clear, require a dedicated network, and leave sensitive data vulnerable (passwords, account numbers, etc.).
  • #40 The requirements are similar to other virtualization solutions.
  • #41 The same command is used for cold and live migration; the type of migration depends on the state of the domain.