
Scale-out Storage on Intel® Architecture Based Platforms: Characterizing and Tuning Practices


From Intel's developer-oriented conference (IDF), here is a rather nice presentation on so-called "scale-out" storage, including an overview of the different solution vendors (slide 6) covering those that offer file, block, and object modes, followed by benchmarks of several of them: Swift, Ceph, and GlusterFS.



  1. Scale-out Storage on Intel® Architecture Based Platforms: Characterizing and Tuning Practices. Yongjie Sun, Application Engineer, Intel; Xiwei Huang, Senior Application Engineer, Intel; Jin Chen, Application Engineer, Intel. SFTS007
  2. Agenda • Dilemma of Data Center Storage • Intel® Architecture (IA) based Scale-out Storage Solution Overview • Increasing Performance of IA-based Scale-Out Storage Solutions with Intel® Products • Characterizing and Tuning Practices: Swift*, Ceph*, GlusterFS* • Summary
  3. Storage Consumption Analysis. [Chart: Worldwide Enterprise Storage Consumption Capacity Shipped by Model, 2006-2015 (PB). Traditional structured data grows linearly; content depots, public clouds, and enterprise hosting services drive exponential growth of unstructured data toward roughly 180,000 PB by 2015.] Mobile and cloud drive exponential growth in storage consumption. Source: IDC, Worldwide Enterprise Storage Systems 2011-2015 Forecast Update, Doc #231051
  4. Can Traditional Storage Solutions Meet the Emerging Needs?
     • Traditional scale-up storage: centralized storage arrays; hosts attached to the arrays through enterprise hardware controllers/cables; fault tolerance at the disk level; expensive solutions.
     • Typical new user scenarios: Micro-blogs (a large number of unstructured messages and photos); Safe City (surveillance video, pictures, and large-volume log files); Healthcare (patient records, high-quality medical images such as CT); Cloud (virtual machine images, hardware flexibility, high performance/high throughput).
     • New storage requirements: Capacity: from GB to TB/PB/EB • Price: $ per MB • Throughput: support hundreds or thousands of hosts at the same time • Response time: response time and throughput remain unchanged while scaling • Flexibility: dynamic allocation and easy management for business • Fault tolerance: no single point of failure.
     • Better solution: scale-out storage based on the Intel® Architecture platform.
  5. What is Scale-Out Storage?
     • Definition: massive but low-cost hardware infrastructure (the Intel® Architecture platform is the most preferable choice); a scalable system architecture in which multiple data servers share the storage load and metadata servers store locator information; high performance/high throughput; high reliability/high availability; high extensibility.
     • Categories: distributed file system, distributed object storage, distributed block device.
     • Characteristics: cold data, with no high requirements on access frequency or real-time behavior; both structured and unstructured data.
     • Scale-out storage design is usually closely integrated with the business. [Diagram: clients exchange data and control flows with data servers and metadata servers, all running on Intel® Architecture (IA) platforms.]
  6. Scale-Out Storage Category Overview
     • Commercial file-based scale-out NAS: IBM* SONAS*, EMC* Isilon*, Dell* FluidFS*, HP* StoreAll*, DDN* EXAScaler*, Hitachi* NAS (HNAS), Quantum* StorNext*, Huawei* OceanStor* N9000, Red Hat* Storage Server 2.0, Oracle* ZFS …
     • Commercial object-based scale-out storage: EMC* Atmos*, DDN* WOS*, Amplidata* AmpliStor* …
     • Open source file-based scale-out storage: GlusterFS*, Ceph*, Lustre*, HDFS*, MooseFS, FastDFS …
     • Open source object-based scale-out storage: Swift, Ceph, Sheepdog, MogileFS …
     Commodity storage solution = Intel® Xeon® processor based servers + open source software stack
  7. Open Source Scale-Out Storage Projects
     • Swift*: object-based. Supports multiple proxy servers, no SPOF; multi-tenant; Python* based; PB-level storage; AWS S3 compatible interface. Maturity: not many commercial deployments.
     • Ceph*: file/object-based. Supports multiple metadata servers, no SPOF; POSIX-compliant, C based; supports block storage, object storage, and a file system. Maturity: emerging solutions; Inktank* is the company providing enterprise-class commercial support for Ceph.
     • GlusterFS*: file-based. No metadata server and no SPOF; POSIX-compliant, C based; supports NFS, CIFS, HTTP, FTP, and Gluster SDK/API access; designed for several hundred PB of data. Maturity: in use in 100+ countries/regions.
     • Lustre*: file-based. Has a metadata server, which is a SPOF; POSIX-compliant, C based; supports 10K+ nodes, PB+ storage, 100 GB/s. Maturity: over 40% of Top 100 HPC projects adopt Lustre.
  8. Increasing Performance of Scale-Out Storage Solutions With Intel® Products
  9. Increasing Performance of Scale-Out Storage With Leading Intel® Solid State Drives (Intel® SSD DC S3500/S3700 series)
     • Fast and consistent performance: SATA III 6 Gbps interface; 75K/36K IOPS 4K random read/write; 50/65 us average latency; <500 us max latency; 500/460 MB/s sustained sequential.
     • Data protection: end-to-end data protection; power loss protection; 256-bit AES encryption; ECC-protected memory; 2.0 million hours MTBF.
     • Endurance: High Endurance Technology; 10 DWPD over five years; meets the JEDEC endurance standard.
     • Capacity: 2.5-inch: 100/200/400/800 GB; 1.8-inch: 200/400 GB.
  10. Increasing Performance of Scale-Out Storage With Leading Intel® 10G Ethernet
     • New technology: add-in cards first, then move to LOM when demand is >50%.
     • New data centers are being built with 10GbE: it saves cost, lowers power, decreases complexity, and future-proofs the build; drivers include virtualization growth and unified networking (LAN, iSCSI, FCoE).
     • Intel® server platform code name Romley 10G options: add-in card (easy sell-up option); mezzanine/riser cards (lower cost, configure to order); 1G/10G dual layout (future upgrade capability); 10GBASE-T and 10G SFP+ LOM (new lowest cost).
     • Claimed benefits of moving from GbE to 10GbE server connections: 15% reduction in infrastructure costs; 80% reduction in cables and switch ports; 45% reduction in power per server; 2x improved bandwidth per rack.
     Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to
  11. Characterizing and Tuning Practices: Swift*, Ceph* and GlusterFS*
  12. Agenda for Characterizing and Tuning Practices. For each solution (Swift*, Ceph*, GlusterFS*), we will talk about: • Solution Architecture • Testing Environments & Workloads • Baseline Performance • Step-by-Step Performance Tuning • Summary
  13. Characterizing and Tuning Practices: Swift*
  14. Swift*: Architecture Overview
     • Swift* is a distributed object storage system designed to scale from a single machine to thousands of servers. It is optimized for multi-tenancy and high concurrency, and is ideal for backups, web and mobile content, and any other unstructured data that can grow without bound.
     • Main components: Proxy Service, Account Service, Container Service, Object Service, Authentication Service.
     • Main features: durability (zones, replicas); no single point of failure (NWR); scalability; multi-tenancy.
  15. Swift*: Testing Environment
     • Hardware:
       • Workload clients: 4 nodes; 2x6-core Xeon X5670 2.93 GHz; 24 GB RAM; SATA* disks; 1000 Mbit/s NIC
       • Proxy: 1 node; 2x8-core Xeon E5-2680 2.70 GHz; 64 GB RAM; SATA disks; 2x 1000 Mbit/s NIC
       • Storage: 4 nodes; 2x8-core Xeon E5-2680 2.70 GHz; 64 GB RAM; SATA disks; 1000 Mbit/s NIC
     • Software stack: swauth 1.04; Swift 1.7.4; COSBench 2.1.0; collectd 4.10.1
  16. Swift*: Workloads
     • Intel developed COSBench*, a benchmark tool to measure cloud object storage service performance. Components: Controller, Driver, Console/Portal.
     • Performance-sensitive metrics: CPU usage, NIC usage.
     • Workloads (all with 5-minute runtime; metrics are IOPS = I/O per second and RESP TIME = response time):
       • Small Read: object size 64 KB; target scenario: website hosting
       • Large Read: object size 1 MB; target scenario: music
       • Small Write: object size 64 KB; target scenario: online games
       • Large Write: object size 1 MB; target scenario: enterprise
  17. Swift*: Baseline
     Swift configuration: proxy workers: 64; object workers: 16; account workers: 16; container workers: 16; XFS inode size: 1024; everything else at defaults.
     Results (IOPS / response time / success rate):
     • Small Read: 1615.25 / 313.63 ms / 99.8%
     • Large Read: 108.16 / 4772.13 ms / 99.8%
     • Small Write: 493.58 / 1039.64 ms / 100%
     • Large Write: 37.96 / 6852.46 ms / 99.94%
     Proxy: CPU usage ~50%, NIC usage ~100%. Storage: NIC usage ~50%, CPU ~40%. The NIC bandwidth is used up, so the next step is to replace the original 1000 Mbit/s NIC with an Intel® 10G NIC.
  18. Tuning: Using the Intel® 82599EB 10 Gigabit Ethernet Controller
     Results (IOPS / response time / success rate / vs. baseline):
     • Small Read: 4271.4 / 159.74 ms / 99.9% / >150%
     • Large Read: 406.42 / 2478.9 ms / 99.49% / >150%
     • Small Write: 560.64 / 916.97 ms / 100% / ~13.5% (did not reach our expectation)
     • Large Write: 94.76 / 3980.7 ms / 100% / ~150%
     Proxy: CPU usage ~50%, NIC usage ~30%. Storage: NIC usage ~50%, CPU ~40%.
     Deep analysis: on the proxy server, CPU0 is used up, mainly handling soft IRQs. [Chart: per-core user/sys/softirq utilization on the proxy; cpu0 is saturated by softirq work while the other cores are mostly idle.]
  19. Tuning: Using the Intel® 82599EB 10 Gigabit Ethernet Controller (cont.)
     • Know your NIC: the Intel® 10G NIC has multiple queues, and each queue owns one IRQ number (see dmesg | grep ixgbe).
     • cat /proc/softirqs | grep NET shows the soft IRQs are not balanced; deeper analysis with stap and addr2line.
     • BKM: bind each IRQ to one core.
  20. Tuning: Using the Intel® 82599EB 10 Gigabit Ethernet Controller (cont.)
     • When the IRQ count is much smaller than the CPU core count, BKM: bind the IRQs to the same physical CPU or the same NUMA node.
     • Know your CPU architecture: bind the IRQs in turn within one socket (cpu0-cpu7 plus their HT siblings cpu16-cpu23) rather than spreading them across sockets (cpu8-cpu15, cpu24-cpu31 belong to the other socket).
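In practice the binding described above is done by writing a one-bit CPU mask to /proc/irq/<n>/smp_affinity for each queue interrupt. A minimal sketch under assumptions (interrupt names containing "ixgbe" as on the test systems, and the cpu0-7/16-23 socket layout from the slide; stop irqbalance first, or it will rewrite the masks):

```shell
#!/bin/sh
# Pin each ixgbe queue IRQ to its own core, round-robin over the cores
# of one socket (cpu0-cpu7 plus their HT siblings cpu16-cpu23 here;
# adjust CORES to your own topology as shown in /proc/cpuinfo).
CORES="0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23"

# core_mask N: hex affinity mask with only bit N set, which is the
# format /proc/irq/<n>/smp_affinity expects.
core_mask() {
    printf '%x' $((1 << $1))
}

set -- $CORES
for irq in $(awk -F: '/ixgbe/ {gsub(/ /, "", $1); print $1}' /proc/interrupts)
do
    core_mask "$1" > "/proc/irq/$irq/smp_affinity"
    echo "IRQ $irq -> cpu$1"
    shift
    [ $# -gt 0 ] || set -- $CORES   # wrap around if IRQs outnumber cores
done
```

Afterwards, the interrupt counts in /proc/interrupts should grow on a different core for each ixgbe queue instead of piling up on cpu0.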
  21. Tuning: Using the Intel® 82599EB 10 Gigabit Ethernet Controller (cont.)
     • Important extra component: memcached, used to cache client tokens and the Ring* for searches. Tune it by increasing the initial memory and the client concurrency.
     • dmesg shows "ip_conntrack: table is full, dropping packet". BKM: increase the NAT hash track table size, e.g. net.ipv4.netfilter.ip_conntrack_max = 655350.
     • Others: raise the Linux* ulimit values.
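The conntrack BKM above can be made persistent through sysctl configuration; a sketch (the ip_conntrack_max value is from the slide, the ulimit value is an illustrative assumption):

```ini
# /etc/sysctl.conf fragment: enlarge the NAT hash track table.
# The ip_conntrack name matches the 2.6-era kernels of the Ubuntu
# 12.04 test systems; newer kernels call it net.netfilter.nf_conntrack_max.
net.ipv4.netfilter.ip_conntrack_max = 655350

# Reload with: sysctl -p
# For the Linux ulimit item, raise the open-file limit in the Swift
# services' environment before starting the workers, for example
# "ulimit -n 65536" (65536 is an example value, not from the slides).
```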
  22. Tuning: Using the Intel® 82599EB 10 Gigabit Ethernet Controller (cont.)
     Results (IOPS / response time / success rate / vs. before tuning):
     • Small Read: 7571.4 / 189.74 ms / 99.9% / >90%
     • Large Read: 736.42 / 2678.9 ms / 99.49% / >90%
     • Small Write: 563.34 / 716.97 ms / 100% / ~0%
     • Large Write: 121.38 / 3280.7 ms / 100% / ~30%
     All workloads improve except Small Write. Proxy: CPU usage ~50%, NIC usage ~40%. Storage: NIC usage ~50%, CPU ~40%. [Charts: proxy NIC TX/RX throughput in KB/s; storage CPU user/sys/iowait %.]
  23. Tuning: Scale Up Disks
     Scale up the storage nodes from 2 SATA disks to 4 SATA disks.
     • Small Write: 723.34 IOPS / 696.17 ms / 100% success / ~28% vs. before tuning. [Charts: proxy NIC TX/RX throughput in KB/s; storage CPU user/sys/iowait %.]
  24. Tuning: Use the Intel® SSD 320 Series for Account & Container
     • Intel® SSDs improve disk performance, but are too expensive to replace all the SATA* disks.
     • Account and container data can be stored on SSD to improve performance.
     • Special workload (a container owning too many objects, then writing):
       Before: 245.19 IOPS / 303.19 ms / 100% success
       After: 298.13 IOPS / 292.23 ms / 100% success / >20% vs. before tuning
  25. Swift* Tuning Summary
     Sample configuration:
     • Hardware: 10GbE for the proxy node (or for the load balancer and proxy node); more disks per storage node; SSD for account and container data.
     • Software: bind each IRQ to one core; increase memcached memory and concurrency; increase the NAT hash track table size.
     • Swift: proxy workers: 64 (twice the CPU cores); object workers: 16 (half the CPU cores); account workers: 16 (half the CPU cores); container workers: 16 (half the CPU cores); XFS inode size: 1024; memcached for authorization.
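The worker counts in the sample configuration scale with the CPU count. A small sketch of that rule of thumb, assuming `nproc`-style logical CPU counting (the ratios themselves are from the slide):

```shell
#!/bin/sh
# swift_workers N: print Swift worker counts for N logical CPUs using
# the slide's rule of thumb: proxy workers = 2x CPUs, while object,
# account, and container workers = CPUs / 2.
swift_workers() {
    cpus=$1
    echo "proxy=$((cpus * 2)) object=$((cpus / 2)) account=$((cpus / 2)) container=$((cpus / 2))"
}

swift_workers "$(nproc)"
```

On the 2x8-core, hyper-threaded proxy node (32 logical CPUs) this yields the slide's values: proxy=64 and object/account/container=16.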
  26. Swift* Tuning Summary (cont.)
     Final results (IOPS / response time / success rate / vs. baseline):
     • Small Read: 7571.4 / 189.74 ms / 99.9% / 350%
     • Large Read: 736.42 / 2678.9 ms / 99.49% / 350%
     • Small Write: 723.34 / 696.17 ms / 100% / ~50%
     • Large Write: 121.38 / 3280.7 ms / 100% / ~220%
     [Diagram: large-scale deployment sample.]
  27. Characterizing and Tuning Practices: Ceph*
  28. Ceph*: Architecture Overview
     Ceph* uniquely delivers object, block, and file storage in one unified system. It is highly reliable, easy to manage, and free. Three interfaces sit on top of LIBRADOS, a library allowing apps to directly access RADOS, with support for C, C++, Java*, Python*, Ruby, and PHP:
     • RADOSGW (for apps): a bucket-based REST gateway, compatible with S3 and Swift.
     • RBD (for hosts/VMs): a reliable and fully distributed block device, with a Linux* kernel client and a QEMU/KVM driver.
     • CEPH FS (for clients): a POSIX-compliant distributed file system, with a Linux kernel client and FUSE support.
     RADOS is a reliable, autonomic, distributed object store comprised of self-healing, self-managing, intelligent storage nodes.
     Our focus is Ceph RBD.
  29. Ceph*: Architecture Overview (cont.)
     Components: MDS (metadata server cluster), OSD (object storage cluster), MON (cluster monitors), and clients. Clients perform file I/O by communicating directly with OSDs. Each process can either link directly to a client instance or interact with a mounted file system.
  30. Ceph*: Testing Environment
     • Nodes (all Ubuntu* 12.04.2 LTS): MON&MDS (NEW-MDS); OSD0 (NEW-OSD0); OSD1 (NEW-OSD1); OSD2 (NEW-OSD2); Client (compute1).
     • Each node: Intel® Xeon® Processor E5-2680 @ 2.70 GHz; 8x8 GB DDR3 1600 MHz; 3x SATA Seagate* 1 TB 7200 RPM HDD; Intel® SSD 320 300 GB; 10Gb NIC: Intel® 82599EB 10 Gigabit Ethernet Controller; 1Gb NIC: Intel® Ethernet Controller I350
  31. Ceph*: Workload & Baseline Result
     • Workload: benchmark tool iozone v3.397, single-client read/write testing:
       iozone -i 0 -i 1 -r X -s Y -f /mnt/rbd-block/iozone -Rb ./rbd-X-Y.xls -I -+r
       where X is the record size and Y is the file size; -I uses O_DIRECT for all operations; -+r uses O_RSYNC|O_SYNC for all operations.
     • Baseline performance: [Chart: single-client R/W throughput (KB) vs. file size (256M-2G) for write and read at 1M/4M/16M record sizes, with system network I/O alongside; reads sit near the 1GbE line rate of roughly 100,000 KB/s.]
  32. Performance Tuning Practices, Step 1: Intel® SSD Replacement
     • Action: use an Intel® SSD to store the journal files:
       mkfs.xfs -n size=64k /dev/sde
       mount /dev/sde /srv/ceph/osd0
       ceph.conf: osd journal = /srv/ceph/osd0/journal
     • Result: an obvious boost for writes vs. the HDD baseline: between 1.47x and 2.73x depending on record size (1M/4M/16M).
  33. Performance Tuning Practices, Step 2: Private Network for OSDs
     • Reason: Ceph* can be configured with a separate network across the OSDs for internal data transport (redundancy copies), which offloads OSD outbound bandwidth.
     • Action: configure Ceph with a dedicated private network in ceph.conf: the [osd] section's cluster network and public network options, plus per-OSD public addr and cluster addr entries.
     • Result: a slight boost for writes vs. SSD alone: between 1.02x and 1.06x depending on record size (1M/4M/16M).
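Filled in with concrete values, the ceph.conf change looks roughly like this; the 192.168.x.x subnets and the osd.0 addresses are hypothetical examples for illustration, not values from the slides:

```ini
# ceph.conf fragment: route OSD replication traffic over a dedicated
# private (cluster) network, separate from the client-facing network.
[osd]
    cluster network = 192.168.2.0/24
    public network  = 192.168.1.0/24

[osd.0]
    public addr  = 192.168.1.10
    cluster addr = 192.168.2.10
```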
  34. Performance Tuning Practices, Step 3: 1GbE Network Adapter Bonding
     • Reason: we may observe that the client's NIC bandwidth has been used up.
     • Action: configure the client to use adapter bonding.
     • Result: a slight boost for writes vs. SSD + private network: between 1.02x and 1.10x depending on record size (1M/4M/16M).
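On the Ubuntu 12.04 client this bonding would typically be an ifenslave-style /etc/network/interfaces entry; a sketch in which the interface names, address, and bonding mode are assumptions for illustration:

```text
# /etc/network/interfaces fragment (requires the ifenslave package).
# eth0/eth1, the address, and balance-rr mode are hypothetical examples.
auto bond0
iface bond0 inet static
    address 192.168.1.20
    netmask 255.255.255.0
    bond-mode balance-rr
    bond-miimon 100
    bond-slaves eth0 eth1
```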
  35. Performance Tuning Practices, Step 4: Use 10GbE to Replace 1GbE
     • Reason: the emulated block device has high I/O wait, and NIC throughput is unbalanced.
     • Action: one way is to adjust the bonding load-balance algorithm; but given that full utilization of bonding is limited to about 200 MB/s, 10GbE is adopted directly here.
     • Result: a great boost in reads vs. 1G bonding + SSD: 4.33x for Read, 1.02x for ReWrite.
  36. Ceph* Tuning Summary
     Throughput (KB/s) by configuration:
     • 1G/HDD: ReWrite 28,831; Read 101,846
     • 1G/SSD: ReWrite 91,946; Read 107,920
     • 1G Bonding/SSD: ReWrite 113,719; Read 119,314
     • 10G/SSD: ReWrite 119,980; Read 516,217
     Reference: local SATA 7200 RPM write performance = 101,403 KB/s
  37. Characterizing and Tuning Practices: GlusterFS*
  38. GlusterFS*: Architecture
     A scale-out NAS file system based on a stackable user-space design. Concepts: server, brick, client, sub-volume, volume. Clients reach the storage through a GlusterFS* gateway via NFS, CIFS (Samba), or RDMA; on the server side, Gluster volumes are built from bricks spread across the storage cloud.
  39. GlusterFS*: Test Environment
     • Hardware: 1-2 GlusterFS* clients; 2 GlusterFS* servers. Each: Intel® Xeon® Processor E5-2680 2.70 GHz; 8x8 GB DDR3 1600 MHz; 3x SATA Seagate* 1 TB 7200 RPM HDD; Intel® SSD 320 300 GB; 10Gb NIC: Intel® 82599EB 10 Gigabit Ethernet Controller
     • Software: OS: Ubuntu* 12.04 LTS; GlusterFS* version 3.2.5; IOzone for the large-file read/write tests
  40. GlusterFS*: Baseline
     • Volume type: Distributed. Volume options:
       • Read (large files): io-thread-count: 16; cache-size: 32MB; cache-max-file-size: 16384PB; cache-min-file-size: 0
       • Write (large files): write-behind-window-size: 1MB; write-behind: off; io-thread-count: 16; flush-behind: on
     • Workload: read/write of large files; record sizes 4K-16M (a bigger record size is better for write operations); 2 clients, one IOzone instance per client:
       iozone -a -s 2g -i 0 -i 1 -f /mnt/glusterfs/iozone0 -Rb 2Clt2Svr-Dtbt-2G.xls -+r
  41. GlusterFS*: Volume Options Optimization
     • Volume type: Distributed. Tuned volume options:
       • Read (large files): io-thread-count: 16 -> 64; cache-size: 32MB -> 2GB; cache-max-file-size: 16384PB; cache-min-file-size: 0
       • Write (large files): write-behind-window-size: 1MB -> 1GB; write-behind: on; io-thread-count: 16 -> 64; flush-behind: on
     • Results: read improves from 103.286 MB/s to 115.698 MB/s; write improves from 20.863 MB/s to 119.826 MB/s (5.2x).
  42. GlusterFS*: Hardware Optimization
     • Volume type: Distributed; volume options unchanged.
     • Hardware optimization: use Intel® SSD to replace the HDDs; use an Intel® 10G NIC to replace the 1GbE NIC.
     • Results [charts; Baseline = volume options disabled, Options = relevant volume optimization options enabled, SSD = bricks on SSD, 10G = 10G NIC on both client and server]: read rises from 115.698 MB/s (Options) to 150.783 MB/s with SSD (1.3x) and 248.94 MB/s with 10G (2.2x); write reaches 116.289 MB/s, with gains of 5.2x (Options), 5.7x (SSD), and 7.2x (10G) over the baseline; network throughput tracks the same trend.
  43. GlusterFS*: Stress Testing
     • Volume type: Distributed; volume options unchanged; 12 bricks: 6 SSD, 6 HDD.
     • Command: iozone -s 24g -r 16m -i 0 -i 1 -t 12 -F iozone0 iozone1 iozone2 iozone3 iozone4 iozone5 iozone6 iozone7 iozone8 iozone9 iozone10 iozone11 -Rb 1C2S12B-Dtbt-2G16M-3.2.5-0329-all@22.xls -+r
     • Results [chart: write, read, network (write), and network (read) throughput]: peak values of 937.337, 802.676, 689.156, and 478.358 MB/s.
  44. GlusterFS*: Striped Volume Tuning
     • Volume type: Striped; volume options as before.
     • Hardware optimization: use Intel® SSD to replace the HDDs; use an Intel® 10G NIC to replace the 1GbE NIC.
     • Results [charts; same Baseline/Options/SSD/10G configurations as the distributed-volume test]: read and write throughput values of 93.803, 130.563, 317.006, 322.687, and 355.532 MB/s across the configurations, with speedups of 1.13x, 2.45x, 2.8x, 3.19x, and 3.25x over the striped baseline.
  45. Tuning Best Known Methods
     • GlusterFS volume options optimization:
       • Read (large files): io-thread-count: 64; cache-size: 2GB; tune cache-max-file-size and cache-min-file-size
       • Write (large files): write-behind-window-size: 1GB; write-behind: on; io-thread-count: 64; flush-behind: on
     • Hardware optimization: use Intel® SSD to replace the HDD; use an Intel® 10G NIC to replace the 1GbE NIC
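As a sketch, the volume-option BKMs above map onto `gluster volume set` commands. The `performance.*` names are how the gluster CLI exposes these translator options (verify against `gluster volume set help` on your version), and `testvol` is a hypothetical volume name:

```shell
#!/bin/sh
# gluster_tune VOLUME: emit the "gluster volume set" commands for the
# large-file read/write BKM values above; pipe the output to sh on a
# node that is part of the trusted storage pool.
gluster_tune() {
    vol=$1
    for opt in \
        "performance.io-thread-count 64" \
        "performance.cache-size 2GB" \
        "performance.write-behind-window-size 1GB" \
        "performance.write-behind on" \
        "performance.flush-behind on"
    do
        echo "gluster volume set $vol $opt"
    done
}

gluster_tune testvol
```

Printing the commands instead of running them directly keeps the sketch safe to dry-run; on a live system, `gluster_tune <your-volume> | sh` would apply them.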
  46. Summary
  47. Summary
     • Scale-out storage is one of the major new trends in data center storage evolution.
     • Intel® platforms and products can greatly increase the performance and expand the usage models of scale-out storage solutions.
     • Open source solutions generally need careful tuning to achieve reliable performance.
  48. Next Steps
     • Our plans: scalability optimization for Ceph*/GlusterFS*; SSD usage models.
     • For the audience: is scale-out storage suitable for you? Contact us!
  49. Additional Sources of Information
     • Other sessions: TECS003 - Lustre*: The Exascale File System, Now at Intel - Room 306B at 17:00
     • Demos in the showcase: Teamsun* OpenStack* Swift* scale-out storage solution based on Intel 10GbE; customer application case study: Intel® Xeon Phi™ platform after porting and tuning; resource scheduler & performance monitoring for an Intel® Xeon® processor & Intel Xeon Phi hybrid cluster
     • More web-based info: controllers.html (Chinese); drives-ssd.html (Chinese); software-tools-for-developers-to-debug-and-optimize.html (Chinese)
  50. Legal Disclaimer
     INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
     • A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS, COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.
     • Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
     • The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
     • Intel product plans in this presentation do not constitute Intel plan of record product roadmaps. Please contact your Intel representative to obtain Intel's current plan of record product roadmaps.
     • Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to:
     • Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
     • Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to:
     • Intel, Xeon, Xeon Phi, Sponsors of Tomorrow and the Intel logo are trademarks of Intel Corporation in the United States and other countries.
     • *Other names and brands may be claimed as the property of others.
     • Copyright ©2013 Intel Corporation.
  51. Legal Disclaimer
     • Any software source code reprinted in this document is furnished under a software license and may only be used or copied in accordance with the terms of that license. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
     • Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to
  52. Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804
  53. Risk Factors
     The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,” “intends,” “plans,” “believes,” “seeks,” “estimates,” “may,” “will,” “should” and their variations identify forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect Intel’s actual results, and variances from Intel’s current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the following to be the important factors that could cause actual results to differ materially from the company’s expectations. Demand could be different from Intel’s expectations due to factors including changes in business and economic conditions; customer acceptance of Intel’s and competitors’ products; supply constraints and other disruptions affecting customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Uncertainty in global economic and financial conditions poses a risk that consumers and businesses may defer purchases in response to negative financial events, which could negatively affect product demand and other related matters. Intel operates in intensely competitive industries that are characterized by a high percentage of costs that are fixed or difficult to reduce in the short term and product demand that is highly variable and difficult to forecast.
Revenue and the gross margin percentage are affected by the timing of Intel product introductions and the demand for and market acceptance of Intel’s products; actions taken by Intel’s competitors, including product offerings and introductions, marketing programs and pricing pressures and Intel’s response to such actions; and Intel’s ability to respond quickly to technological developments and to incorporate new features into its products. The gross margin percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated costs; start-up costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; product manufacturing quality/yields; and impairments of long-lived assets, including manufacturing, assembly/test and intangible assets. Intel’s results could be affected by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Expenses, particularly certain marketing and compensation expenses, as well as restructuring and asset impairment charges, vary depending on the level of demand for Intel’s products and the level of revenue and profits. Intel’s results could be affected by the timing of closing of acquisitions and divestitures. Intel’s current chief executive officer plans to retire in May 2013 and the Board of Directors is working to choose a successor. The succession and transition process may have a direct and/or indirect effect on the business and operations of the company.
In connection with the appointment of the new CEO, the company will seek to retain our executive management team (some of whom are being considered for the CEO position), and keep employees focused on achieving the company’s strategic goals and objectives. Intel’s results could be affected by adverse effects associated with product defects and errata (deviations from published specifications), and by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues, such as the litigation and regulatory matters described in Intel’s SEC reports. An unfavorable ruling could include monetary damages or an injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. A detailed discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent Form 10-Q, report on Form 10-K and earnings release. Rev. 1/17/13