Cloudian releases CLOUDIAN HyperStore 5.1, enabling big data to be used as "smart data" ahead of full-scale IoT/M2M adoption ~ CLOUDIAN HyperStore 5.1 software and appliances officially certified for Hadoop and the Hortonworks Data Platform, enabling petabyte-scale analytics ~
http://cloudian.jp/news/pressrelease_detail/press-release-34.html
Cloudian HyperStore Ushers in Era of Smart Data With Efficient, Scalable Storage for Internet of Things ~ With Hadoop and Hortonworks Data Platform Qualified on HyperStore 5.1 Software and Appliances, Customers Can Perform In-Place Data Analysis at Petabyte-Scale; Cloudian Becomes Hortonworks Certified Technology Partner ~
http://www.cloudian.com/news/press-releases/cloudian-hyperstore-5.1-ushers-in-era-of-smart-data.php
http://hortonworks.com/partner/cloudian/
http://hortonworks.com/wp-content/uploads/2014/08/Cloudian-Hortonworks-Solutions-Brief.pdf
Update your private cloud with 14th generation Dell EMC PowerEdge FC640 serve... (Principled Technologies)
Apache Cassandra NoSQL databases can offer reliability and flexibility for workloads like media streaming or social media. Running these databases in a private cloud lets you maintain control of your data while gaining the agility and flexibility the cloud provides.
In our datacenter, the Dell EMC PowerEdge FC640 solution powered by Intel Xeon Gold 5120 processors dramatically increased performance for Apache Cassandra workloads compared to a legacy solution. By choosing a solution that can do up to 4.7 times the work of the legacy solution, your infrastructure could handle more requests at a time. We also found that the Dell EMC PowerEdge FC640 solution could do all this additional work in less space, which could let you hold off on renting more datacenter space or building out your existing space as your business grows.
Comparing Dell Compellent network-attached storage to an industry-leading NAS... (Principled Technologies)
A flexible NAS solution addresses many organizational challenges, from server backup to hosting production applications and databases. Advanced NAS solutions such as the Intel Xeon processor-based Dell Compellent FS8600 NAS provide flexibility and scalability, allowing various use and drive options throughout their lifecycles. This scale and flexibility enable an organization to alleviate performance bottlenecks anywhere in the organization simply by reallocating or adding disk resources.
We found that the Intel Xeon processor-based Dell Compellent FS8600 NAS solution backed up a small-file corpus up to 15.9 percent faster and a large-file corpus up to 17.1 percent faster than a similarly configured, industry-leading NAS solution. This means that selecting the Dell Compellent FS8600 NAS has the potential to help optimize an organization’s infrastructure.
Bruno Francesco – Tribunale di Palermo, criminal proceedings investigation office (ufficio istruzione processi penali) (Giuseppe Ciampolillo)
The Integrated Environmental Authorisation (AIA) No. 693 of 18 July 2008, for which the architect Gianfranco Cannova acted as the officer responsible for the procedure and under which Italcementi now requests a renewal, cannot be granted, because the decree whose renewal is sought has been void since 17 July 2010 for non-compliance with the prescriptions set out in Decree 693 of 18 July 2008.
This is confirmed by the convening of a technical panel for 9 June 2011 at 11:00 by Service 2 VIA-VAS, directed by Dr. Natale Zuccarello.
The technical panel convened by the responsible director, Dr. Natale Zuccarello, had the task of "verifying whether the company Italcementi S.p.A. has proceeded to implement the prescriptions contained in the decree of reference".
Therefore, a renewal request cannot be made for a decree that no longer exists.
Since no measures appear to have been taken to comply with the provisions of the Integrated Environmental Authorisation granted in 2008, Italcementi S.p.A. bears serious responsibility for having continued to operate a highly polluting plant harmful to citizens' health; this also exposes the regional administration to liability, since its agents, by remaining inactive, are jointly liable with Italcementi S.p.A. for the damage to citizens' health.
There is no evidence that the administration ever carried out any check on compliance with the prescriptions imposed within the terms set by the AIA.
Also regarding the AIA procedure:
On 9 February 2007 (protocol 10741), the 2nd VIA-VAS service, replying to Italcementi's request (ARTA protocol note 75686 of 2 November 2006) seeking the Integrated Environmental Authorisation, informed Italcementi that the request had to undergo an Environmental Impact Assessment (VIA) procedure.
The communication was signed by the director responsible for the 2nd VIA-VAS service, engineer Vincenzo Sansone.
At the services conference of 21 November 2007, the officer responsible for the procedure, architect Gianfranco Cannova, informed those present that he had "received note 2132 of 20 November 2007 stating that the VIA file is under examination and that the operational unit will forward its findings at the conclusion of the procedure".
At the services conference of 31 January 2008, the officer responsible for the procedure, architect Gianfranco Cannova, informed those present that he had "received note 138 of 25 January 2008 stating that the VIA file is under examination and that the operational unit will forward its findings at the conclusion of the procedure".
Italcementi requests "the issuing of the Integrated Environmental Authorisation for the current plant, including petroleum coke, excluding the technological conversion from the semi-dry to the dry process, which
Peanut Butter and Jelly: Mapping the Deep Integration between Ceph and OpenStack (Sean Cohen)
Ceph is the most widely deployed storage technology used with OpenStack, most often because it's an open source, massively scalable, unified software-defined storage solution. Its popularity is also due to its unique and optimized technical integration with the OpenStack services and its pure-software approach to scaling. In this session, we'll review how Ceph is integrated into Nova, Glance, Keystone, Cinder, and Manila and demonstrate why using traditional storage products won't give you the full benefits of an elastic cloud infrastructure. We'll also cover the flexible deployment options, available through Red Hat Enterprise Linux OpenStack Platform and Red Hat Ceph Storage, for seamless operations and key scenarios like disaster recovery. We'll discuss architectural options for deploying a multisite OpenStack cluster and cover the varying levels of maturity in the OpenStack services for configuring multisite. This session will also show how other technologies, such as Intel SSDs, are being used with Ceph on OpenStack to increase performance and reduce power consumption, including reference architectures and best practices for Ceph and SSDs.
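For the Cinder integration mentioned here, the usual wiring is Cinder's RBD volume driver. A minimal cinder.conf sketch might look like the following, where the pool name, Ceph client user, and libvirt secret UUID are illustrative placeholders:

```ini
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = <libvirt-secret-uuid>
```

Glance and Nova expose similar Ceph-backed settings, which is what enables the copy-on-write image-to-volume paths the session refers to.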
Open Cloud Storage @ OpenStack Summit Paris (it-novum)
These are the original slides from Michael Kienle's talk at the OpenStack Summit in Paris, November 2014, focusing on Open Cloud Storage: building a flexible and large-scale software-defined storage platform for OpenStack.
OpenStack Day Italy: openATTIC as an open storage platform for OpenStack (it-novum)
The first OpenStack Day in Italy took place in Milan on Friday, May 30. This presentation shows how the open source storage project openATTIC can be used as storage platform for cloud systems like OpenStack.
openATTIC is a storage project started in 2012 in Germany. It can be downloaded at www.openattic.org
Building an open source cloud storage platform for OpenStack - openATTIC (it-novum)
Although OpenStack is purposely open, it partly relies on proprietary storage products. To build your cloud upon a truly open software-defined storage (SDS) platform, the storage project openATTIC might be worth a try. openATTIC is a cloud storage platform based on 100% open source. It is optimized for OpenStack's Cinder component and for openQRM.
The openATTIC project was initiated to support organizations in getting the best return on investment in their data center operations while achieving the greatest flexibility. openATTIC is based on open source software and extended with intelligent storage functions. Building software-defined storage platforms with openATTIC is easy, as it gives you complete access to all of its functionality via a single central API, making integration and extension both simple and inexpensive. openATTIC is hardware-independent and includes intelligent SDS functions such as:
- consistent snapshots of VMs, databases and applications
- high availability and high performance
- the comprehensive variety of protocols typical of a unified storage system
- support for a variety of standard hardware
- security and enterprise-class reliability
More about the openATTIC project at http://openattic.org/en
Pivotal has set up and operationalized a 1,000-node Hadoop cluster called the Analytics Workbench. Managing such a large deployment takes special setup and skills. This session shares how we set it up and how to manage it.
OpenNebulaConf 2016 - OpenNebula, a story about flexibility and technological... (OpenNebula Project)
Cloud providers are constantly addressing the technology limitations of their infrastructures, which must be overcome to meet customer needs. In this presentation, we will demonstrate how the technological agnosticism and management flexibility of OpenNebula has allowed Todoencloud to provide the most efficient open source solution to the needs of its customers, choosing the most appropriate virtualization technology (Xen and KVM), storage approach (ZFS vs. Ceph), cloud-bursting solutions (Azure, Amazon) and customized networking topologies.
Open Hybrid Cloud.
A presentation given by Erik Geensen, responsible for Cloud, Platform and Virtualization at Red Hat Benelux, at the OPEN'14 conference in Belgium.
Building Complex Workloads in Cloud - AWS PS Summit Canberra (Amazon Web Services)
In this session we will explore technologies and solutions to deploy increasingly complex workloads like High Performance Computing, Big Data and AI seamlessly to the cloud. You will hear from two strategic partners about how they have used the AWS cloud and Intel technologies to accelerate innovation for their customers.
Speaker: Jason Jacobs, Industry Manager, ANZ Public Sector, Intel Corporation with Aileen Gemma Smith CEO, Vizalytics and Zack Levy, DevOps Partner, Deloitte Consulting
Software Defined Storage. While the growth of software-defined storage isn't good news for traditional vendors, it is very good news for IT managers and IT teams, who stand to gain unlimited scale, reduced costs, reduced risks and non-proprietary management. Open source means the end of vendor lock-in, with true cloud portability and a solid foundation for future infrastructure.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
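A typical first step in this kind of hardening guide is a restrictive security context on every pod. A minimal sketch, with hypothetical pod name and image, might be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                      # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true                    # refuse containers that run as root
    runAsUser: 10001
    seccompProfile:
      type: RuntimeDefault                # default syscall filtering
  containers:
    - name: app
      image: registry.example.com/app:1.0 # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                   # drop all Linux capabilities
```

Policies like these can then be enforced cluster-wide, so a single forgotten manifest does not undo the hardening.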
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
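To give a flavor of what such a model looks like, here is a hypothetical service definition in Smithy IDL 2.0 syntax; all names are invented for illustration:

```smithy
$version: "2"

namespace example.hello

// A hypothetical service with a single read-only operation
service HelloService {
    version: "2024-01-01"
    operations: [SayHello]
}

@readonly
operation SayHello {
    input := {
        name: String
    }
    output := {
        message: String
    }
}
```

A generator such as Smithy Ruby would consume this model and emit a client with typed input and output shapes for SayHello.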
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
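On the JMeter side, this integration is normally done with the bundled Backend Listener using the InfluxdbBackendListenerClient implementation. As a sketch, its key parameters look roughly like this; the URL and application name are illustrative:

```properties
# Backend Listener implementation (ships with JMeter):
# org.apache.jmeter.visualizers.backend.influxdb.InfluxdbBackendListenerClient
influxdbUrl = http://localhost:8086/write?db=jmeter  # illustrative InfluxDB write endpoint
application = my-app       # tag used to filter dashboards in Grafana
measurement = jmeter       # InfluxDB measurement the samples are written to
summaryOnly = false        # also send per-transaction metrics, not just totals
```

Grafana then queries the same measurement to render response times, throughput and error rates in real time.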
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence gathering facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Ceph Day Amsterdam 2015 - Building your own disaster? The safe way to make Ceph storage ready!
1. Build your own disaster? The safe way to make Ceph storage enterprise ready!
Dieter Kasper, CTO Data Center Infrastructure, Emerging Technologies & Solutions, Global Delivery
2015-03-31
Copyright 2015 FUJITSU
2. The safe way to make Ceph storage enterprise ready
- ETERNUS CD10k integrated in OpenStack
- mSHEC Erasure Code from Fujitsu
- Contribution to performance enhancements
3. Building Storage with Ceph looks simple
Ceph + some servers + network = storage
4. Building Storage with Ceph looks simple – but…
Building Ceph open source storage yourself brings many new complexities:
- Rightsizing server, disk types and network bandwidth
- Silos of management tools (HW, SW, …)
- Keeping Ceph versions in sync with the versions of server HW, OS, connectivity and drivers
- Management of maintenance and support contracts of components
- Troubleshooting
5. The challenges of software defined storage
What users want:
- Open standards
- High scalability
- High reliability
- Lower costs
- No lock-in from a vendor
What users may get:
- A self-developed storage system based on open / industry-standard HW & SW components
- High scalability and reliability? Only if the stack works!
- Lower investments but higher operational efforts
- Lock-in into their own stack
6. ETERNUS CD10000 – Making Ceph enterprise ready
Build Ceph open source storage yourself, or get ETERNUS CD10000 out of the box:
- incl. support
- incl. maintenance
- E2E solution contract by Fujitsu, based on Red Hat Ceph Enterprise
- Easy deployment / management by Fujitsu
- Lifecycle management for hardware & software by Fujitsu
ETERNUS CD10000 combines open source storage with enterprise-class quality of service
8. Unlimited Scalability
- Cluster of storage nodes
- Capacity and performance scale by adding storage nodes
- Three different node types enable differentiated service levels:
  - Density / capacity optimized
  - Performance optimized
  - Optimized for small-scale dev & test
- The 1st version of CD10000 (Q3/2014) is released for a range of 4 to 224 nodes
- Scales up to >50 petabyte
Basic node: 12 TB | Performance node: 35 TB | Capacity node: 252 TB
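As a quick sanity check on the figures above, the quoted maximum of 224 nodes times the largest (252 TB) node capacity is consistent with the ">50 petabyte" claim; decimal units (1 PB = 1000 TB) are assumed here:

```python
# Per-node capacities as quoted on the slide (in TB)
node_capacity_tb = {"basic": 12, "performance": 35, "capacity": 252}

MAX_NODES = 224  # released range: 4 to 224 nodes

# Upper bound: every node is a 252 TB capacity node
max_capacity_tb = MAX_NODES * node_capacity_tb["capacity"]
max_capacity_pb = max_capacity_tb / 1000  # assumption: 1 PB = 1000 TB

print(max_capacity_tb)       # 56448
print(max_capacity_pb > 50)  # True
```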
9. Immortal System
Node1 + Node2 + … + Node(n): nodes can be added, including nodes with new generations of hardware
- Non-disruptive add / remove / exchange of hardware (disks and nodes)
- Mix of nodes of different generations, online technology refresh
- Very long lifecycle reduces migration efforts and costs
10. TCO optimized
- Based on x86 industry-standard architectures
- Based on open source software (Ceph)
- High-availability and self-optimizing functions are part of the design at no extra cost
- Highly automated and fully integrated management reduces operational efforts
- Online maintenance and technology refresh reduce the cost of downtime dramatically
- Extremely long lifecycle delivers investment protection
- End-to-end design and maintenance from Fujitsu reduce evaluation, integration and maintenance costs
Better service levels at reduced costs – business-centric storage
11. One storage – seamless management
ETERNUS CD10000 delivers one seamless management for the complete stack:
- Central Ceph software deployment
- Central storage node management
- Central network management
- Central log file management
- Central cluster management
- Central configuration, administration and maintenance
- SNMP integration of all nodes and network components
12. Seamless management (2)
- Dashboard – overview of cluster status
- Server management – management of cluster hardware: add / remove servers (storage nodes), replace storage devices
- Cluster management – management of cluster resources: cluster and pool creation
- Monitoring the cluster – monitoring overall capacity, pool utilization, status of OSD, Monitor and MDS processes, Placement Group status and RBD status
- Managing OpenStack interoperation – connection to the OpenStack server, and placement of pools in the Cinder multi-backend
14. Example: Replacing an HDD
Plain Ceph:
- Take the failed disk offline in Ceph
- Take the failed disk offline at OS / controller level
- Identify the (right) hard drive in the server
- Exchange the hard drive
- Partition the hard drive at OS level
- Make and mount the file system
- Bring the disk up in Ceph again
On ETERNUS CD10000:
- vsm_cli <cluster> replace-disk-out <node> <dev>
- Exchange the hard drive
- vsm_cli <cluster> replace-disk-in <node> <dev>
15. Example: Adding a Node
Plain Ceph:
- Install hardware
- Install OS
- Configure OS
- Partition disks (OSDs, journals)
- Make filesystems
- Configure network
- Configure ssh
- Configure Ceph
- Add node to cluster
On ETERNUS CD10000:
- Install hardware (it will automatically PXE boot and install the current cluster environment, including the current configuration)
- Make the node available to the GUI
- Add the node to the cluster with a mouse click on the GUI
16. Seamless management drives productivity
Manual Ceph installation:
- Setting up a 4-node Ceph cluster with 15 OSDs: 1.5 – 2 admin days
- Adding an additional node: 3 admin hours up to half a day
Automated installation through ETERNUS CD10000:
- Setting up a 4-node Ceph cluster with 15 OSDs: 1 hour
- Adding an additional node: 0.5 hours
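Taking the quoted numbers at face value, the speed-up can be worked out. The 8-hour admin day is an assumption, and the manual figures use the worst-case setup time and the best-case node-addition time from the slide:

```python
ADMIN_DAY_HOURS = 8  # assumption: one admin day = 8 working hours

manual_setup_hours = 2 * ADMIN_DAY_HOURS  # worst case quoted: 2 admin days
auto_setup_hours = 1.0

manual_add_node_hours = 3.0  # best case quoted: 3 admin hours
auto_add_node_hours = 0.5

print(manual_setup_hours / auto_setup_hours)        # 16.0 (x faster cluster setup)
print(manual_add_node_hours / auto_add_node_hours)  # 6.0 (x faster node addition)
```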
17. Adding and Integrating Apps
- The ETERNUS CD10000 architecture enables the integration of apps
- Fujitsu is working with customers and software vendors to integrate selected storage apps
- E.g. archiving, sync & share, data discovery, cloud apps…
[Diagram: cloud services, sync & share, archive and iRODS data discovery apps on top of ETERNUS CD10000, which provides object-, block- and file-level access; central management; Ceph storage system software with Fujitsu extensions; 10GbE frontend network; fast interconnect network; performance and capacity nodes]
18. ETERNUS CD10000 at University Mainz
- Large university in Germany
- Uses the iRODS application for library services
- iRODS is open-source data management software in use at research organizations and government agencies worldwide
- Organizes and manages large depots of distributed digital data
- The customer has built an interface from iRODS to Ceph
- Stores raw data from measurement instruments (e.g. research in chemistry and physics) for 10+ years, meeting EU compliance rules
- Needs to provide extensive and rapidly growing data volumes online at reasonable costs
- Will implement a sync & share service on top of ETERNUS CD10000
19. How ETERNUS CD10000 supports cloud business
- Cloud IT trading platform: a European provider operates a trading platform for cloud resources (CPU, RAM, storage)
- Cloud IT resources supplier: the Darmstadt data center (DARZ) offers storage capacity via the trading platform, using ETERNUS CD10000 to provide storage resources for an unpredictable demand
20. Summary ETERNUS CD10k – Key Values
ETERNUS CD10000:
- Unlimited scalability
- TCO optimized
- The new unified
- Immortal system
- Zero downtime
ETERNUS CD10000 combines open source storage with enterprise-class quality of service
21. The safe way to make Ceph storage enterprise ready
- ETERNUS CD10k integrated in OpenStack
- mSHEC Erasure Code from Fujitsu
- Contribution to performance enhancements
22. 21
What is OpenStack
Free open source (Apache license) software governed by a non-profit foundation
(corporation) with a mission to produce the ubiquitous Open Source Cloud
Computing platform that will meet the needs of public and private clouds
regardless of size, by being simple to implement and massively scalable.
[Foundation member tiers: Platinum, Gold, Corporate, …]
Massively scalable cloud operating system that
controls large pools of compute, storage, and
networking resources
Community OSS with contributions from 1000+
developers and 180+ participating organizations
Open web-based API Programmatic IaaS
Plug-in architecture; allows different hypervisors,
block storage systems, network implementations,
hardware agnostic, etc.
http://www.openstack.org/foundation/companies/
23. 22
OpenStack Summit in Paris Nov.2014
OpenStack Momentum
Impressively demonstrated at the OpenStack Summit: more than 5,000
participants from 60+ countries, high-profile companies from all industries
– e.g. AT&T, BBVA, BMW, CERN, Expedia, Verizon – sharing their
experience and plans around OpenStack
OpenStack @ BMW: Replacement of a self-built IaaS cloud; covers a pool
of x,000 VMs; rapid growth planned; the system is up & running but currently
used productively by selected departments only.
OpenStack @ CERN: In production since July 2013; 4 operational IaaS
clouds, the largest one with 70k cores on 3,000 servers; expected to pass
150k cores by Q1.2015.
24. 23
Attained fast growing customer interest
VMware clouds dominate
OpenStack clouds already #2
Worldwide adoption
Source: OpenStack User Survey and Feedback Nov 3rd 2014
Source: OpenStack User Survey and Feedback May 13th 2014
25. 24
Why are Customers so interested?
Source: OpenStack User Survey and Feedback Nov 3rd 2014
Greatest industry & community support
compared to alternative open platforms:
Eucalyptus, CloudStack, OpenNebula
“Ability to Innovate” jumped from #6 to #1
27. 26
OpenStack Cloud Layers
OpenStack and ETERNUS CD10000
[Diagram: OpenStack and ETERNUS CD10000 layers –
Physical servers (CPU, memory, SSD, HDD) and network
Base operating system (CentOS); OAM (DHCP, deploy, LCM)
Hypervisors: KVM, ESXi, Hyper-V
OpenStack services: Compute (Nova), Network (Neutron) + plugins, Dashboard (Horizon), billing portal,
OpenStack cloud APIs, Authentication (Keystone), Images (Glance), EC2 API, Metering (Ceilometer), Manila (File)
Storage paths into RADOS: Volume (Cinder) → Block (RBD), Object (Swift) → S3 (RADOS GW), File (Manila) → CephFS
= Fujitsu Open Cloud Storage]
28. 27
The OpenStack – Ceph Ecosystem @Work
[Diagram: an OpenStack cloud controller and several OpenStack compute nodes work against one
Ceph storage cluster; the controller creates a VM template (stored with replicas in the cluster),
compute nodes snapshot/clone the template into production VMs (also replicated), and running VMs
can move between compute nodes while using the same storage]
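The snapshot/clone flow shown above relies on copy-on-write: cloning a template costs nothing up front, and only the blocks a production VM actually overwrites consume new space. A toy Python model of that semantics (the class and method names are illustrative; this is not the RBD implementation):

```python
class Image:
    """Toy copy-on-write image: reads fall through to the parent
    snapshot until a block is overwritten locally."""

    def __init__(self, blocks=None, parent=None):
        self.blocks = dict(blocks or {})  # block index -> data
        self.parent = parent              # read-only parent snapshot, or None

    def read(self, idx):
        if idx in self.blocks:
            return self.blocks[idx]
        if self.parent is not None:
            return self.parent.read(idx)  # fall through to the template
        return b"\0"                      # unallocated block

    def write(self, idx, data):
        self.blocks[idx] = data           # copy-on-write: only the clone changes

    def clone(self):
        """Instant clone: no data is copied, just a parent reference."""
        return Image(parent=self)

# template image with two blocks; the clone is instant and zero-copy
template = Image({0: b"kernel", 1: b"rootfs"})
vm = template.clone()
vm.write(1, b"rootfs-v2")   # the VM diverges in only this block
```

After the write, the clone stores exactly one block of its own while still reading unmodified blocks from the template, which is why many VMs can share one template cheaply.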
29. 28
The safe way to make Ceph storage enterprise ready
ETERNUS CD10k integrated in OpenStack
mSHEC Erasure Code from Fujitsu
Contribution to performance enhancements
30. 29
Backgrounds (1)
Erasure codes for content data
Content data for ICT services is ever-growing
Demand for higher space efficiency and durability
Reed Solomon code (de facto erasure code) improves both
However, Reed Solomon code is not very recovery-efficient.
[Diagram: triple replication stores the content data plus two full copies (3x space);
Reed Solomon code stores the content data plus parities (1.5x space)]
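The space figures follow directly from the layouts: replication stores full copies, while a Reed Solomon layout with k data chunks and m parity chunks stores (k + m) / k of the original size. A quick check (the k = 10, m = 5 parameters are illustrative):

```python
def replication_overhead(copies):
    """Space used relative to the content size for n-way replication."""
    return float(copies)

def reed_solomon_overhead(k, m):
    """k data chunks plus m parity chunks per k chunks of content."""
    return (k + m) / k

print(replication_overhead(3))        # 3.0 -> "3x space"
print(reed_solomon_overhead(10, 5))   # 1.5 -> "1.5x space"
```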
31. 30
Backgrounds (2)
Local parity improves recovery efficiency
Data recovery should be as efficient as possible
• in order to avoid multiple disk failures and data loss
Reed Solomon code was improved by local parity methods
• data read from disks is reduced during recovery
[Diagram: with plain Reed Solomon code (no local parities), recovering a lost chunk reads data
from all data chunks; with a local parity method, only the chunks in one local parity group are read]
However, multiple disk failures are not considered.
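The read reduction, and its limit, can be made concrete with a toy model of an LRC-style code (group sizes and the fallback rule are illustrative, not the exact MS-LRC/Xorbas algorithms): a local group can repair one lost chunk cheaply, but two losses in the same group force a global Reed Solomon decode over k chunks.

```python
def recovery_reads(failed, groups, k):
    """Estimate chunks read to rebuild the `failed` data chunks.

    groups: list of sets of data-chunk ids, each backed by one local
    parity. A group repairs at most one lost chunk; anything more
    falls back to global (Reed Solomon) decoding over k chunks.
    """
    reads = 0
    remaining = set(failed)
    for g in groups:
        lost = remaining & g
        if len(lost) == 1:
            # local repair: read the surviving group members + the local parity
            reads += (len(g) - 1) + 1
            remaining -= lost
    if remaining:
        # multiple failures in one group: global decode
        reads += k
    return reads

groups = [{0, 1, 2, 3, 4}, {5, 6, 7, 8, 9}]   # two local groups, k = 10
print(recovery_reads({2}, groups, 10))        # 5: cheap local repair
print(recovery_reads({2, 3}, groups, 10))     # 10: falls back to global parities
```

The second case is exactly the weakness the next slide addresses.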
32. 31
Local parity method for multiple disk failures
Existing methods are optimized for a single disk failure
• e.g. Microsoft MS-LRC, Facebook Xorbas
However, their recovery overhead is large in case of multiple disk failures
• because they may have to use global parities for recovery
Our goal is a method that handles multiple disk failures efficiently
33. 32
SHEC (= Shingled Erasure Code)
An erasure code only with local parity groups
• to improve recovery efficiency in case of multiple disk failures
The calculation ranges of local parities are shifted and partly overlap with each
other (like the shingles on a roof)
• to keep enough durability
Our Proposed Method (SHEC)
k: data chunks (= 10)
m: parity chunks (= 6)
l: calculation range (= 5)
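One plausible way to sketch the shingled layout for the parameters shown: m parity windows of l consecutive data chunks each, shifted by (k - l) / (m - 1) so they overlap like roof shingles. This is an illustration only, assuming the shift divides evenly; the exact layout rules live in the Ceph SHEC plugin.

```python
def shec_layout(k, m, l):
    """Sketch of SHEC-style shingled parity windows.

    Returns m sets of data-chunk indices, each covered by one local
    parity. Assumes (k - l) is a multiple of (m - 1).
    """
    shift = (k - l) // (m - 1)
    return [set(range(i * shift, i * shift + l)) for i in range(m)]

windows = shec_layout(10, 6, 5)
# every data chunk is covered by at least one (overlapping) local parity
assert set().union(*windows) == set(range(10))
# recovering one lost chunk reads only l chunks (l - 1 data + 1 parity)
# from its window, instead of k chunks as with plain Reed Solomon
assert all(len(w) == 5 for w in windows)
```

The overlap is what preserves durability: each chunk sits in several windows, so more failure combinations stay recoverable than with disjoint local groups.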
34. 33
SHEC is implemented as an erasure code plugin of Ceph, an open
source scalable object storage
SHEC’s Implementation on Ceph
4MB objects are split into data/parity chunks and distributed over OSDs
The encode/decode logic (the SHEC plugin) is separated from the main part of Ceph storage
35. 34
1. mSHEC is more adjustable than Reed Solomon code,
because SHEC provides many recovery-efficient layouts,
including Reed Solomon codes
2. mSHEC's recovery time was ~20% faster than Reed
Solomon code in case of double disk failures
3. The mSHEC erasure code was added to
Ceph v0.93 (= pre-Hammer release)
4. For more information see
https://wiki.ceph.com/Planning/Blueprints/Hammer/Shingled_Erasure_Code_(SHEC)
or ask Fujitsu
Summary mSHEC
36. 35
The safe way to make Ceph storage enterprise ready
ETERNUS CD10k integrated in OpenStack
mSHEC Erasure Code from Fujitsu
Contribution to performance enhancements
37. 36
Areas to improve Ceph performance
Ceph has adequate performance today,
but there are performance issues which prevent us from taking full
advantage of our hardware resources.
Two main goals for improvement:
(1) Decrease latency in the Ceph code path
(2) Enhance large cluster scalability with many nodes / OSDs
39. 38
1. LTTng general http://lttng.org/
General
open source tracing framework for Linux
trace Linux kernel and user space applications
low overhead and therefore usable on
production systems
activate tracing at runtime
Ceph code contains LTTng trace points already
Our LTTng based profiling
activate within a function, collect timestamp information at the interesting places
save collected information in a single trace point at the end of the function
transaction profiling instead of function profiling: use Ceph transaction id's to
correlate trace points
focused on primary and secondary write operations
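The profiling scheme described above (collect timestamps at interesting places inside a function, emit a single consolidated trace point at the end, correlate by transaction id) can be sketched in a few lines of Python. This is a toy stand-in for the LTTng tracepoints; all names and the simulated work are illustrative:

```python
import time
from collections import defaultdict

# per-transaction timestamp buffers, keyed by transaction id
_trace = defaultdict(list)

def mark(txid, label):
    """Record a (label, timestamp) pair for one transaction."""
    _trace[txid].append((label, time.monotonic()))

def flush(txid):
    """Emit one consolidated record at the end of the function:
    per-step latencies (seconds) relative to the first mark."""
    samples = _trace.pop(txid)
    t0 = samples[0][1]
    return [(label, t - t0) for label, t in samples]

def handle_write(txid):
    """Stand-in for a Ceph write path with internal timestamps."""
    mark(txid, "enqueue")
    time.sleep(0.001)          # stand-in for the journal write
    mark(txid, "journal")
    time.sleep(0.001)          # stand-in for the filestore commit
    mark(txid, "commit")
    return flush(txid)         # one trace point at the end

steps = handle_write(42)
print([label for label, _ in steps])   # ['enqueue', 'journal', 'commit']
```

Using a transaction id as the key is what lets trace points from different threads (e.g. primary write and replication write handling) be correlated afterwards.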
40. 39
2. Test setup
Ceph Cluster
3 storage nodes:
2 CPU sockets, 8 cores per socket, Intel E5-2640, 2.00GHz, 128 GB memory
12 OSDs: 4 OSDs per storage node (SAS disks), journals on raw SSD partitions
CentOS 6.6, linux 3.10.32, Ceph v0.91, storage pools with replication 3
Ceph Client
2 CPU sockets, 6 cores per socket, Intel E5-2630, 2.30GHz, 192 GB memory
CentOS 6.6, Linux 3.10.32
Ceph kernel client (rbd.ko + libceph.ko)
Test Program
fio 2.1.10
randwrite, 4kByte buffersize, libaio / iodepth 16
test writes 1 GByte of data (or 262144 I/O requests)
41. 40
3. LTTng trace session
Ceph cluster is up and running: ceph-osd binaries from standard packages
stop one ceph-osd daemon
restart with ceph-osd binary including LTTng based profiling
wait until cluster healthy
start LTTng session
run fio test
stop LTTng session
collect trace data and evaluate
Typical sample size on the osd under test:
22,000 primary writes (approx. 262144 / 12)
44,000 replication writes (approx. (262144 * 2) / 12)
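These sample sizes follow from the test setup: 1 GByte of 4 kByte random writes is 262144 requests, and with replication 3 each request causes one primary write plus two replication writes, spread over the 12 OSDs:

```python
# test parameters from the setup slide
total_requests = (1 * 2**30) // (4 * 2**10)   # 1 GByte of 4 kByte writes
osds = 12
replication = 3

primary_writes_per_osd = total_requests // osds
replica_writes_per_osd = total_requests * (replication - 1) // osds

print(total_requests)            # 262144
print(primary_writes_per_osd)    # 21845  (~22,000)
print(replica_writes_per_osd)    # 43690  (~44,000)
```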
43. 42
4.1. LTTng data evaluation: Replication Write
Observation:
replication write latency suffers from the "large variance problem"
minimum and average differ by a factor of 2
This is a common problem visible for many ceph-osd components.
Why is variance so large?
Observation: No single hotspot visible.
Observation: Active processing steps do not differ between minimum and average
sample as much as the total latency does.
Additional latency penalty mostly at the switch from
sub_op_modify_commit to Pipe::writer
no indication that queue length is the cause
Question: Can the overall thread load on the system and Linux scheduling be the
reason for the delayed start of the Pipe::writer thread?
44. 43
4.1.1 LTTng Microbenchmark Pipe::reader
"decode": fill message MSG_OSD_SUBOP data structure from bytes in the input
buffer. There is no decoding of the data buffer!
Optimizations:
"decode": a project currently restructures some messages to decrease the effort for
message encoding and decoding.
"authenticate": is currently optimized, too. Disable via "cephx sign messages"
45. 44
4.1.2 LTTng Microbenchmark Pipe::writer
"message setup": buffer allocation and encoding of message structure
"enqueue": enqueue at low level socket layer (not quite sure whether this really
covers the write/sendmsg system call to the socket)
48. 47
5. Thread classes and ceph-osd CPU usage
The number of threads per ceph-osd depends on the complexity of the Ceph cluster: 3 nodes with 4 OSDs
each → ~700 threads per node; 9 nodes with 40 OSDs each → >100k threads per node
ThreadPool::WorkThread is a hot spot = work in the ObjectStore / FileStore
total CPU usage during test 43.17 CPU seconds
Pipe::Writer 4.59 10.63%
Pipe::Reader 5.81 13.45%
ShardedThreadPool::WorkThreadSharded 8.08 18.70%
ThreadPool::WorkThread 15.56 36.04%
FileJournal::Writer 2.41 5.57%
FileJournal::WriteFinisher 1.01 2.33%
Finisher::finisher_thread_entry 2.86 6.63%
49. 48
5.1. FileStore benchmarking
most of the work is done in FileStore::do_transactions
each write transaction consists of
3 calls to omap_setkeys,
the actual call to write to the file system
2 calls to setattr
Proposal: coalesce calls to omap_setkeys
one function call instead of three, setting 5 key/value pairs instead of 6 (one key is written twice)
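The coalescing idea amounts to a batch merge: three key/value batches become one call, and the duplicated key collapses, so 5 pairs are set instead of 6. A toy Python model (the real change is in FileStore's C++ transaction handling, and the key names here are purely illustrative):

```python
def coalesce_setkeys(batches):
    """Merge several omap_setkeys batches into one.

    Later values win for duplicate keys, matching the effect of
    issuing the setkeys calls one after another.
    """
    merged = {}
    for batch in batches:
        merged.update(batch)
    return merged

# one write transaction as described above: 3 batches, 6 pairs total,
# with one key written twice (hypothetical key names)
batches = [
    {"_info": b"a", "snapset": b"b"},
    {"_info": b"c", "epoch": b"d"},
    {"pglog": b"e", "ver": b"f"},
]
merged = coalesce_setkeys(batches)
print(len(merged))   # 5 pairs in a single call instead of 6 across 3 calls
```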
51. 50
6. With our omap_setkeys coalescing patch
Reduced latency in ThreadPool::WorkThread by 54 microseconds = 25%
Significant reduction of CPU usage at the ceph-osd: 9% for the complete ceph-osd
Approx 5% better performance at the Ceph client
total CPU usage during test 43.17 CPU seconds 39.33 CPU seconds
Pipe::Writer 4.59 10.63% 4.73 12.02%
Pipe::Reader 5.81 13.45% 5.91 15.04%
ShardedThreadPool::WorkThreadSharded 8.08 18.70% 7.94 20.18%
ThreadPool::WorkThread 15.56 36.04% 12.45 31.66%
FileJournal::Writer 2.41 5.57% 2.44 6.22%
FileJournal::WriteFinisher 1.01 2.33% 1.03 2.61%
Finisher::finisher_thread_entry 2.86 6.63% 2.76 7.01%
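The headline numbers in the table can be cross-checked: total ceph-osd CPU drops from 43.17 to 39.33 CPU seconds, which is the "9% for the complete ceph-osd" quoted above, with most of the saving in ThreadPool::WorkThread (15.56 → 12.45 CPU seconds):

```python
# totals from the before/after measurement table
before, after = 43.17, 39.33
savings = (before - after) / before
print(f"{savings:.1%}")   # 8.9% -> the "approx 9%" quoted above

# ThreadPool::WorkThread, where the omap_setkeys work happens
wt_before, wt_after = 15.56, 12.45
print(f"{(wt_before - wt_after) / wt_before:.1%}")   # 20.0% less WorkThread CPU
```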
52. 51
Summary on Performance
Two main goals for improvement:
(1) Decrease latency in the Ceph code path
(2) Enhance large cluster scalability with many nodes / OSDs
There is a long path to improving the overall Ceph performance.
Many steps are necessary to gain a factor of 2. Current performance
work focuses on (1), decreasing latency.
To get an order-of-magnitude improvement on (2) we have to
master the limits inherent in the overall OSD design:
the transaction structure spanning multiple objects
PG omap data with high-level state logging
54. 53
Summary and Conclusion
ETERNUS CD10k is the safe way to make Ceph enterprise ready
Unlimited Scalability: 4 to 224 nodes, scales up to >50 Petabyte
Immortal System with Zero downtime: non-disruptive add / remove / exchange of hardware
(disks and nodes) and software updates
TCO optimized: Highly automated and fully integrated management reduces operational efforts
Tight integration in OpenStack with own GUI
Fujitsu mSHEC technology (integrated in Hammer) shortens
recovery time by ~20% compared to Reed Solomon code
We love Ceph! But love is not blind, so we actively contribute in
the performance analysis & code/performance improvements.