CeTune is an open-source toolkit for deploying, benchmarking, analyzing, and tuning Ceph clusters. It provides a web UI for configuring Ceph deployments and benchmarks, and generates reports with performance metrics and system analysis. CeTune aims to simplify Ceph tuning for beginners and engineers, and to give developers an easy way to stress-test and analyze performance. It includes modules for deployment, benchmarking with tools such as fio and COSBench, system analysis with tools such as iostat and sar, and tuning Ceph configurations.
Ceph Day Shanghai: CeTune - Benchmarking and Tuning Your Ceph Cluster
2. CeTune Overview
• What is CeTune?
• CeTune is a toolkit/framework to deploy, benchmark, analyze, and tune Ceph.
• CeTune's objectives?
• For beginners: shorten the landing time of a Ceph-based storage solution.
• For performance engineers: simplify the procedure to deploy and tune Ceph, easily find device bottlenecks, and identify unexpected software behavior from the processed data.
• For developers: provide an easy way to verify code with a quick stress test and a clear performance report.
3. CeTune is open sourced today!!
• GitHub:
• https://github.com/01org/CeTune
• License:
• Apache License v2.0
• Main features:
• Manage CeTune configuration and execute CeTune with the WebUI
• Deploy module: install Ceph with the CeTune CLI, deploy Ceph with the WebUI
• Benchmark module: supports qemurbd, fiorbd, COSBench
• Analyzer module: supports iostat, sar, interrupts, performance counters
• Report visualizer: supports config download and CSV download; all data is presented as line charts
• Mailing list:
• cephperformance@lists.01.org
• Subscribe: https://lists.01.org/mailman/listinfo/cephperformance
5. CeTune WebUI
Configuration view: in this view, the user can add the cluster description, Ceph configuration, benchmark test cases, etc., then click 'Execute' to kick off CeTune.
Callouts on the clean-build option:
True: destroy the Ceph cluster and rebuild from scratch
False: compare the current cluster with the configuration and do an incremental build
radosgw is deployed only when this is 'True'
6. CeTune WebUI
Ceph configurations that need to be set before deployment
7. CeTune WebUI
8. CeTune WebUI
System configuration, including disk read_ahead, I/O scheduler, and sector size
9. CeTune WebUI
Ceph tuning: pool configuration and ceph.conf tuning.
When CeTune starts a benchmark, it first compares the current Ceph tuning with this configuration, applies the tuning if needed, then starts the test.
10. CeTune WebUI
Benchmark configuration
Benchmark test case configuration
11. CeTune WebUI
System metrics: iostat, sar, interrupts
Performance counter processing
Select deploy and benchmark, then click to execute CeTune
12. CeTune WebUI
Status monitoring view: reports the current CeTune working status, so the user can interrupt CeTune if necessary.
13. CeTune WebUI
Reports view: the result report provides two views; the summary view shows the history report list, and double-clicking an entry opens the detailed report of a specific test run.
14. CeTune WebUI
Details available per test run:
ceph.conf configuration
cluster description
CeTune process log
fio/COSBench error log
15. CeTune WebUI
16. CeTune WebUI
18. Terms definition
• CeTune controller
• reads the config files and controls the process to deploy, benchmark, and analyze the collected data
• CeTune workers
• controlled by the CeTune controller; work as workload generators and system metrics collectors
• CeTune configuration files
• all.conf
• tuner.conf
• testcase.conf
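The controller's first step is loading the three configuration files listed above. A minimal sketch of that step, assuming simple `key = value` files; the parsing format and the `ConfigHandler` class shown here are illustrative, not CeTune's exact implementation:

```python
import os

def load_conf(path):
    """Parse a simple 'key = value' config file into a dict.
    Blank lines and lines starting with '#' are skipped."""
    conf = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            conf[key.strip()] = value.strip()
    return conf

class ConfigHandler:
    """Loads all.conf, tuner.conf and testcase.conf for the controller."""
    def __init__(self, conf_dir):
        self.all = load_conf(os.path.join(conf_dir, "all.conf"))
        self.tuner = load_conf(os.path.join(conf_dir, "tuner.conf"))
        self.testcase = load_conf(os.path.join(conf_dir, "testcase.conf"))
```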
19. Modules
• Deployment:
• clean build, incremental build
• Ceph cluster, radosgw
• Benchmark:
• qemurbd, fiorbd, cosbench
• seqwrite, seqread, randwrite, randread, mixreadwrite
• Analyzer:
• System_metrics: iostat, sar, interrupts
• Performance_counter
• Latency_breakdown (WIP)
• output as a JSON file
• Tuner:
• Ceph tuning after comparing
• pool tuning
• disk tuning
• other system tuning?
• Visualizer:
• read from the JSON file
• output as HTML
(Architecture diagram: the Deployer, Benchmarker, Analyzer, Tuner, and Visualizer modules are driven by the CeTune configuration through ConfigHandler and common.py.)
20. Modules
(Workflow diagram: the Web Interface asynchronously drives the Controller, which runs the Deploy, Benchmark, Analyzer, and Visualizer handlers in sequence; the CeTune status monitor and the CeTune process log, shared through Common, track progress.)
21. Deployment
• Ceph package installation:
• uses ceph-deploy to do the installation.
• checks, and if necessary reinstalls, the nodes defined in the CeTune description file to make sure all nodes are on the same Ceph major release.
• e.g., if some nodes are 0.94.3 and some are 0.94.2, the 0.94.2 nodes won't be reinstalled (same major release).
• Deployment:
• Clean_build: remove the current OSDs, MONs, and radosgw, then deploy the Ceph cluster from zero.
• Incremental_build (non-clean build): compare the current Ceph cluster with the desired configuration, then add OSD devices, new nodes, or radosgw daemons if necessary.
(Diagram: DeployHandler drives Deploy.py and Deploy_rgw.py.)
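The major-release check described above can be sketched as follows; the helper names and the `x.y.z` version-string format (as printed by `ceph --version`) are assumptions for illustration:

```python
def major_release(version):
    """'0.94.3' and '0.94.2' share the major release '0.94'."""
    return ".".join(version.split(".")[:2])

def nodes_to_reinstall(node_versions, target):
    """Return the nodes whose Ceph major release differs from the target.
    node_versions: dict of node name -> installed version string."""
    want = major_release(target)
    return [node for node, ver in node_versions.items()
            if major_release(ver) != want]
```

With this rule, a node on 0.94.2 is left alone when the target is 0.94.3, while a node on 0.87.x would be reinstalled.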
22. Benchmark
• RBD
• fio running inside a VM; fio rbd engine
• COSBench
• uses radosgw to test object storage
• CephFS
• fio cephfs engine (not recommended; a more generic benchmark is planned for CeTune v2)
• Generic devices
• distributes fio test jobs to multiple nodes and multiple disks.
(Diagram: the Controller drives BenchmarkHandler (Benchmark.py), which dispatches to qemurbd.py, fiorbd.py, generic.py, cosbench.py, or fiocephfs.py; TunerHandler reads testcases.conf, and a plugin hook is provided for user-defined benchmarks.)
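The dispatch from a test case to a benchmark engine, including the user-defined plugin hook, might look like the sketch below; the class and function names are hypothetical, not CeTune's actual API:

```python
class Benchmark:
    """Base class: each engine prepares and runs one workload."""
    def run(self, testcase):
        raise NotImplementedError

class FioRbd(Benchmark):
    def run(self, testcase):
        # A real engine would launch fio with the rbd ioengine here.
        return "fiorbd: %(workload)s" % testcase

class QemuRbd(Benchmark):
    def run(self, testcase):
        return "qemurbd: %(workload)s" % testcase

# Engine registry: built-in engines plus anything registered via the hook.
ENGINES = {"fiorbd": FioRbd, "qemurbd": QemuRbd}

def register_engine(name, cls):
    """The 'plugin' hook: user-defined benchmarks register here."""
    ENGINES[name] = cls

def run_testcase(testcase):
    engine = ENGINES[testcase["engine"]]()
    return engine.run(testcase)
```

A user-defined benchmark only needs to subclass `Benchmark` and call `register_engine` before the test run starts.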
23. Analyzer
• System metrics:
• iostat: per partition
• sar: CPU, memory, NIC
• interrupts
• top: raw data
• Performance counters:
• indicate software behavior
• stable and well formatted in the Ceph code
• well supported in CeTune
• LTTng (blkin):
• better reveals the code path
• the current LTTng code lacks a uniform identifier to mark one op, but this is doable by chaining the thread ID and object ID.
• the blkin branch is not merged into Ceph master yet.
• not fully supported in CeTune, but we have experience visualizing/processing blkin LTTng data and are able to help.
(Diagram: the Controller drives BenchmarkHandler and AnalyzerHandler; the iostat, sar, perf-counter, fio, and COSBench handlers read the collected files from the result directory and merge them into Result.json.)
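The final analyzer step, merging every collector's samples into one Result.json, can be sketched like this; the metric field names are illustrative, and CeTune's real schema may differ:

```python
import json

def summarize_samples(samples):
    """Average a list of per-interval metric dicts (e.g. parsed iostat rows)."""
    if not samples:
        return {}
    keys = samples[0].keys()
    return {k: sum(s[k] for s in samples) / len(samples) for k in keys}

def write_result(collected, path):
    """Merge every collector's summary into one Result.json file.
    collected: dict of collector name -> list of sample dicts."""
    result = {name: summarize_samples(samples)
              for name, samples in collected.items()}
    with open(path, "w") as f:
        json.dump(result, f, indent=2)
    return result
```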
25. Tuning
• Current CeTune supports:
• pool tuning
• ceph.conf tuning
• debug set to 0
• op_thread_num
• xattr size
• etc.
• disk tuning
• To be extended:
• mount omap to a separate device
• multiple OSD daemons on one physical device
• adopting a flashcache device as an OSD
• multiple pools?
(Flow diagram: check_tuning → apply_tuning → generate ceph.conf → deploy Ceph → run benchmark, with tuning hooks at two points in the flow.)
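The check_tuning/apply_tuning step boils down to diffing the desired tuning against the live configuration and applying only what changed. A sketch under that assumption; the helper names are hypothetical, and `apply_fn` stands in for the real command runner (e.g. something like `ceph tell ... injectargs`):

```python
def check_tuning(current, desired):
    """Return only the ceph.conf keys whose values differ from the
    desired tuning, so nothing is touched when they already match."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

def apply_tuning(current, desired, apply_fn):
    """Apply the diff via apply_fn and record it in the current view."""
    diff = check_tuning(current, desired)
    for key, value in diff.items():
        apply_fn(key, value)
        current[key] = value
    return diff
```

Running the benchmark only after the diff is empty guarantees each test run sees exactly the configured tuning.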
26. Tuning (All-flash Setup)
(Bar chart: normalized Seq.W, Seq.R, Rand.W, and Rand.R performance for TUNING1 through TUNING6.)
Case: tuning items
TUNING1: default
TUNING2: 2 OSD instances running on 2 partitions of the same SSD
TUNING3: Case-2 + set debug log to 0
TUNING4: Case-3 + set throttle values to 10x of the default value
TUNING5: Case-4 + disable RBD cache and OpTracker, tune FD cache size to 64
TUNING6: Case-5 + replace TCMalloc with jemalloc
32. Legal Information: Benchmark and Performance Claims
Disclaimers
Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase.
Test and System Configurations: See Back up for details.
For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
33. Risk Factors
The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "plans," "believes," "seeks," "estimates," "may," "will," "should" and their variations identify forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect Intel's actual results, and variances from Intel's current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the following to be important factors that could cause actual results to differ materially from the company's expectations.
Demand for Intel's products is highly variable and could differ from expectations due to factors including changes in the business and economic conditions; consumer confidence or income levels; customer acceptance of Intel's and competitors' products; competitive and pricing pressures, including actions taken by competitors; supply constraints and other disruptions affecting customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Intel's gross margin percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; and product manufacturing quality/yields. Variations in gross margin may also be caused by the timing of Intel product introductions and related expenses, including marketing expenses, and Intel's ability to respond quickly to technological developments and to introduce new features into existing products, which may result in restructuring and asset impairment charges. Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Results may also be affected by the formal or informal imposition by countries of new or revised export and/or import and doing-business regulations, which could be changed without prior notice. Intel operates in highly competitive industries and its operations have high costs that are either fixed or difficult to reduce in the short term. The amount, timing and execution of Intel's stock repurchase program and dividend program could be affected by changes in Intel's priorities for the use of cash, such as operational spending, capital spending, acquisitions, and as a result of changes to Intel's cash flows and changes in tax laws. Product defects or errata (deviations from published specifications) may adversely impact our expenses, revenues and reputation. Intel's results could be affected by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues. An unfavorable ruling could include monetary damages or an injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting Intel's ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. Intel's results may be affected by the timing of closing of acquisitions, divestitures and other significant transactions. A detailed discussion of these and other factors that could affect Intel's results is included in Intel's SEC filings, including the company's most recent reports on Form 10-Q, Form 10-K and earnings release.
Rev. 1/15/15
35. All Flash Setup Configuration Details
Client cluster:
CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz, 36C/72T
Memory: 64 GB
NIC: 10Gb
Disks: 1 HDD for OS
Ceph cluster:
CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz, 36C/72T
Memory: 64 GB
NIC: 10GbE
Disks: 4 x 400 GB DC3700 SSD (INTEL SSDSC2BB120G4) each cluster
Software (Ceph cluster): OS Ubuntu 14.04.2, kernel 3.16.0, Ceph 0.94.2
Software (client host): OS Ubuntu 14.04.2, kernel 3.16.0
• Ceph version is 0.94.2
• XFS as the file system for the data disks
• 4 partitions on each SSD, two of them for OSD daemons
• replication setting (2 replicas), 2048 PGs/OSD
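The placement-group figure above is the kind of number produced by the common Ceph rule of thumb for sizing PGs: roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two. A sketch of that heuristic; it comes from general Ceph guidance, not from CeTune itself:

```python
def pg_count(num_osds, replicas, target_pgs_per_osd=100):
    """Rule-of-thumb PG count for a pool: (OSDs * target) / replicas,
    rounded up to the next power of two."""
    raw = num_osds * target_pgs_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power
```

For example, 16 OSDs with 2 replicas gives 1024 PGs, and 40 OSDs with 2 replicas gives 2048.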
36. Tuning Release Configuration Details
Client nodes (Qty: 3):
CPU: 2 x Intel Xeon E5-2680 @ 2.8GHz (20 cores, 40 threads)
Memory: 128 GB (16 x 8GB DDR3 @ 1333 MHz)
NIC: 2 x 10Gb 82599EB, ECMP (20Gb), 64 GB (8 x 8GB DDR3 @ 1600 MHz)
Disks: 1 HDD for OS
Client VM:
CPU: 1 vCPU (vcpupin)
Memory: 512 MB
Ceph nodes:
CPU: 1 x Intel Xeon E3-1275 v2 @ 3.5 GHz (4 cores, 8 threads)
Memory: 32 GB (4 x 8GB DDR3 @ 1600 MHz)
NIC: 1 x 82599ES 10GbE SFP+, 4 x 82574L 1GbE RJ45
HBA/C204: {SAS2008 PCI-Express Fusion-MPT SAS-2} / {6 Series/C200 Series Chipset Family SATA AHCI Controller}
Disks:
1 x INTEL SSDSC2BW48 2.5" 480GB for OS
1 x Intel P3600 2TB PCI-E SSD (journal)
2 x Intel S3500 400GB 2.5" SSD as journal
10 x Seagate ST3000NM0033-9ZM 3.5" 3TB 7200rpm SATA HDD (data)