When your industry’s revenue projections flatten, how can you break away and continue to grow? At Orange Business Services, we decided to reinvent the company by becoming a cloud services provider—entering a market still at the beginning of its growth trajectory.
Application Report: Virtualizing Tier-1 Workloads using FC SANs – IT Brand Pulse
This application report describes how a major industrial distributor virtualized servers running tier-1 eCommerce and SAP applications, and how doing so impacted its data center infrastructure.
The Most Trusted In-Memory Database in the World – Altibase
Life is a database. How you manage data defines business. ALTIBASE HDB, with its hybrid architecture, combines the extreme speed of an in-memory database with the storage capacity of an on-disk database in a single unified engine.
ALTIBASE® HDB™ is the only hybrid DBMS in the industry that combines an in-memory DBMS with an on-disk DBMS behind a single uniform interface, enabling real-time access to large volumes of data while simplifying and revolutionizing data processing. ALTIBASE XDB is the world’s fastest in-memory DBMS, featuring unprecedented high performance, and supports the SQL-99 standard for wide applicability.
Altibase is a provider of in-memory data solutions for real-time access, analysis, and distribution of high volumes of data in mission-critical environments.
Please visit our website (www.altibase.com) to learn more about our products and read more about our case studies. Or contact us at info@altibase.com. We look forward to helping you!
Move to Hadoop, Go Faster and Save Millions – Mainframe Legacy Modernization – DataWorks Summit
In spite of recent advances in computing, many core business processes remain batch-oriented and run on mainframes. Annual mainframe costs run to six figures or more, and grow with capacity needs. To tackle this cost challenge, many organizations have considered or attempted multi-year mainframe migration or re-hosting strategies. Traditional approaches to mainframe elimination call for large initial investments and carry significant risk: it is hard to match mainframe performance and reliability. Using Hadoop, Sears/MetaScale developed an innovative alternative that enables batch-processing migration to Hadoop without the risks, time, and costs of other methods. This solution has been adopted in multiple businesses with excellent results and associated cost savings as mainframes are physically eliminated or downsized: a reduction of 200 MIPS can yield $1 million in annual savings, and MetaScale eliminated over 900 MIPS and an entire mainframe system for one Fortune 500 client. This presentation illustrates the reference architecture and approach MetaScale used to move mainframe processing to the Hadoop platform without altering user-facing business applications.
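The cited ratio ($1 million in annual savings per 200 MIPS eliminated) makes the arithmetic easy to check. A minimal sketch, with the per-MIPS figure derived directly from the numbers quoted in the abstract:

```python
# Back-of-the-envelope model of the MIPS savings cited above.
# The abstract quotes roughly $1M/year in savings per 200 MIPS eliminated.
SAVINGS_PER_MIPS = 1_000_000 / 200  # $5,000 per MIPS per year (cited ratio)

def annual_savings(mips_eliminated: float) -> float:
    """Estimate yearly savings from retiring mainframe capacity."""
    return mips_eliminated * SAVINGS_PER_MIPS

# The Fortune 500 example: 900+ MIPS eliminated
print(f"${annual_savings(900):,.0f} per year")  # → $4,500,000 per year
```

At the quoted ratio, the 900-MIPS elimination mentioned in the abstract works out to roughly $4.5 million per year.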
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bring... – Hitachi Vantara
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bringing Flexibility, Agility and Readiness to the Real-Time Enterprise. VMworld 2015
Fully leveraging your data, infrastructure, and IT staff has never been more important than it is now, during these times of fiscal responsibility and evolving business demands. In response, businesses need to maximize their IT by getting increased performance, efficiency, and economics out of their infrastructure and resources.
This presentation focuses on three key technologies that provide particularly compelling opportunities to maximize IT:
-All-flash systems that accelerate access to information for faster decision-making, analysis and productivity.
-Unified storage solutions that enable you to process more, and diverse, workloads in less time while driving capacity efficiencies.
-Unified compute solutions that deliver improved orchestration and automation and enhance the productivity of your IT staff, while avoiding costly over- or under-provisioning.
Consolidate More: High Performance Primary Deduplication in the Age of Abunda... – Hitachi Vantara
Increase productivity, efficiency, and environmental savings by eliminating silos, preventing sprawl, and reducing complexity by 50%. Powerful consolidation systems, Hitachi Unified Storage or Hitachi NAS Platform, let you consolidate existing file servers and NAS devices onto fewer nodes. You can perform the same or even more work with fewer devices and lower overhead, while reducing floor space and associated power and cooling costs. View this webcast to learn how to:
-Shrink your primary file data without disrupting performance.
-Increase productivity and utilization of available capacity.
-Defer additional storage purchases.
-Save on power, cooling, and space costs.
For more information please visit: http://www.hds.com/products/file-and-content/network-attached-storage/?WT.ac=us_inside_rm_htchunfds
The Next Wave of 10GbE webcast with Crehan Research was held on 10/5 and focused on current and future 10GbE adapter and switch market drivers and adoption trends, and the effects of the introduction of 10GBASE-T products on the overall 10GbE market.
Klaus Gottschalk from IBM presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
"Last year IBM, together with partners out of the OpenPOWER Foundation, won two of the multi-year contracts of the US CORAL program. Within these contracts IBM develops an accelerated HPC infrastructure and software development ecosystem that will be a major step towards exascale computing. We believe that the CORAL roadmap will enable a massive pull for transformation of HPC codes for accelerated systems. The talk will discuss the IBM HPC strategy, explain the OpenPOWER Foundation, and show the IBM OpenPOWER roadmap for CORAL and beyond."
Watch the video presentation: http://wp.me/p3RLHQ-f9x
Learn more: http://e.huawei.com/us/solutions/business-needs/data-center/high-performance-computing
See more talks from the Switzerland HPC Conference:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
We cover the IBM solution for HPC. In addition to the hardware and software stack, we show how the rational choice of compilation and runtime parameters helps to significantly improve the performance of technical computing applications.
Why Networked FICON Storage Is Better Than Direct Attached Storage – Hitachi Vantara
With the many enhancements and improvements in mainframe I/O technology in the past five years, the question "Do I need FICON switching technology, or should I go with direct attached storage?" is frequently asked. This webcast explores both technical and business reasons for implementing a switched FICON architecture instead of a direct attached FICON architecture for mainframe attached storage. The discussion also includes an overview of the Hitachi Data Systems and Brocade solutions for mainframe environments. By viewing this webcast, you’ll learn:
-The business and technical value of networking FICON attached storage instead of direct attaching it.
-The business and technical value of Hitachi mainframe storage capabilities.
-The offerings from Hitachi Data Systems and Brocade that can help you achieve the benefits of networked FICON storage.
For more information on our mainframe solutions please read: http://www.hds.com/solutions/infrastructure/mainframe/?WT.ac=us_mg_sol_mnfr
Programmable I/O Controllers as Data Center Sensor Networks – Emulex Corporation
This is a presentation on 'Programmable I/O Controllers as Data Center Sensor Networks,' presented by Shaun Walsh and Sanjeev Datla at the Storage Developer Conference in October 2011.
Enabling the Software Defined Data Center for Hybrid IT – NetApp
Recently, NetApp held a Cloud Breakfast for customers of our High Touch Customer Program. This was a combined presentation from OBS, VMware and NetApp.
Presenters:
Jim Sangster, Senior Director, Solutions Marketing, NetApp - "Cloud for the Hybrid Data Center"
John Gilmartin, Vice President, Cloud Infrastructure Products, VMware - "Next Generation of IT"
Axel Haentjens, Vice President, Marketing and International, Orange Cloud for Business - "NetApp Epic Story OBS"
Tim Waldron, Manager, Cloud Solutions, NetApp EMEA - "Cloud Services – An EMEA Perspective"
Oracle Cloud: Big Data Use Cases and Architecture – Riccardo Romani
Oracle Italy Systems Presales Team presents Big Data in any flavor: on-prem, public cloud, and cloud at customer.
Presented at a Digital Transformation event, February 2017.
Software Defined IT @ SOIEL Event, Rome, 6 April 2017 – Riccardo Romani
Oracle presents the concept of the "virtuous circle" of our integrated cloud: we are the first to put the value proposition of engineered systems into practice, using them to build our own cloud data centers as well as our customers' data centers. From this cross-pollination comes value-adding innovation, realized in the launch of revolutionary new systems such as Oracle Cloud Machine, or in the further evolution of our flagship systems such as Exadata or the Private Cloud Appliance, which together constitute the Application Software Defined IT offering.
Carlson Companies is one of the largest privately held compani....docx – wendolynhalbert
Carlson Companies is one of the largest privately held companies in the United States,
with more than 180,000 employees in more than 140 countries. Carlson enterprises
include a presence in marketing, business and leisure travel, and hospitality industries.
Its Information Technology (IT) division, Carlson Shared Services, acts as a service
provider to its internal clients and consequently must support a spectrum of user
applications and services. The IT division uses a centralized data processing model to
meet business operational requirements. The central computing environment includes an
IBM mainframe and over 50 networked Hewlett-Packard and Sun servers
[KRAN04, CLAR02,HIGG02]. The mainframe supports a wide range of applications,
including Oracle financial database, e-mail, Microsoft Exchange, Web, PeopleSoft, and a
data warehouse application.
In 2002, the IT division established six goals for assuring that IT services continued to
meet the needs of a growing company with heavy reliance on data and applications:
1. Implement an enterprise data warehouse.
2. Build a global network.
3. Move to enterprise-wide architecture.
4. Establish six-sigma quality for Carlson clients.
5. Facilitate outsourcing and exchange.
6. Leverage existing technology and resources.
The key to meeting these goals was to implement a storage area network (SAN) with a
consolidated, centralized database to support mainframe and server applications.
Carlson needed a SAN and data center approach that provided a reliable, highly scalable
facility to accommodate the increasing demand of its users.
https://jigsaw.vitalsource.com/books/9781323079324/epub/OPS/xhtml/filebib.xhtml#biblio_111
https://jigsaw.vitalsource.com/books/9781323079324/epub/OPS/xhtml/filebib.xhtml#biblio_46
https://jigsaw.vitalsource.com/books/9781323079324/epub/OPS/xhtml/filebib.xhtml#biblio_89
Storage Requirements
Until recently, the central DP shop included separate disk storage for each server, plus
that of the mainframe. This dispersed data storage scheme had the advantage of
responsiveness; that is, the access time from a server to its data was minimal. However,
the data management cost was high. There had to be backup procedures for the storage
on each server, as well as management controls to reconcile data distributed throughout
the system. The mainframe included an efficient disaster recovery plan to preserve data
in the event of major system crashes or other incidents and to get data back online with
little or no disruption to the users. No comparable plan existed for the many servers.
As Carlson’s databases grew beyond 10 terabytes (TB) of business-critical data, the IT
team determined that a comprehensive network storage strategy would be required to
manage future growth.
Solution
Concept
The existing Carlson server complex made use of Fibre Channel links to achieve
communication and backup capabilities among servers. Carlson considered extending
this capability to a full-blown Fibr ...
Riverbed SteelHead Family Brochure 10.10.13.
SteelHead is Riverbed's flagship product for WAN acceleration. Riverbed is the world leader in WAN optimization controllers (WOC), according to Gartner's Magic Quadrant.
InfiniBand Storage Benefits a Mega Retail Company! – Tyrone Systems
A well-known textile retail company wanted to use SSDs in its storage servers to get faster transaction speeds, but the interconnect with the servers was a bottleneck, which Netweb Technologies resolved with an InfiniBand-based storage solution.
During a period when various proposed solutions under consideration were either too expensive, too proprietary
or functionally inadequate, FTEL was contacted by DataCore and introduced to the SANsymphony™ advanced
storage networking and management software. Ian Batten, FTEL’s IT Director, explained, “The DataCore solution
appeared to offer many of the aspects missing from other options, such as block level snapshot, easier device
sharing, single point of administration, better caching and the prospect of interesting solutions to the backup
issue.” FTEL decided to evaluate SANsymphony utilizing commodity RAID devices for storage. With even
relatively low-end storage, the results were impressive enough that the solution moved forward into a
production environment.
IT Brand Pulse industry brief describing a new approach to configuring virtual networks for virtual machines: layering hypervisor-based virtual networking services on top of hardware-based virtual networking services. The result is more efficient management and lower costs.
Problems with Enterprise WAN Solutions - The Cloud X Ecosystem – Roy Hilliard
IT departments are faced with balancing application availability, reliability, security, and flexibility. It is far too much to handle, especially with shrinking staffs. No one size or solution fits all, but calculated moves into SD-WAN/NFV solutions for aspects of the business are proving easier and more efficient.
Cloud Economics: Design, Capacity and Operational Concerns – Marcos García
Learn how to choose your e-commerce infrastructure and how to forecast the TCO based on a simple model, including explanations of how public, private, and hybrid cloud computing work.
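As a rough illustration of the kind of simple model such a TCO forecast can use, the sketch below compares a capex-heavy on-premises option with a pay-as-you-go cloud option over a fixed planning horizon. All cost figures and categories are invented for illustration, not taken from the presentation:

```python
# Hypothetical simple TCO model: up-front capital expenditure plus
# recurring operating expenditure over a planning horizon in months.
def tco(capex: float, monthly_opex: float, months: int = 36) -> float:
    """Total cost of ownership over the planning horizon."""
    return capex + monthly_opex * months

# Illustrative inputs (invented numbers):
on_prem = tco(capex=120_000, monthly_opex=3_000)  # hardware up front, low run cost
public_cloud = tco(capex=0, monthly_opex=7_500)   # no capex, pay-as-you-go
print(on_prem, public_cloud)  # → 228000 270000
```

Even a model this small makes the crossover visible: the cloud option wins on short horizons (no capex), while the on-premises option becomes cheaper once the horizon is long enough to amortize the hardware.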
Datasheet: Enterprise Cloud Backup & Recovery with Symantec NetBackup – Symantec
Symantec NetBackup delivers reliable backup and recovery across applications, platforms, and physical and virtual environments. A single console unites the management and reporting of both on-premises and in-cloud information to provide additional operating efficiencies and simplified administration. The NetBackup platform has deep VMware® and Microsoft Hyper-V integration, built-in deduplication to protect the private cloud, seamless integration with industry-leading public cloud storage providers, and self-service and multi-tenancy for backup as a service (BaaS).
The NetBackup cloud storage module enables you to back up and restore data from cloud storage providers and is integrated with Symantec's OpenStorage (OST) module, which provides features that can enhance the operational experience of backup and recovery from the cloud.
DevOps the NetApp Way: 10 Rules for Forming a DevOps Team – NetApp
Does your enterprise IT organization practice DevOps without a common team approach? To create a standardized way for development and operations teams to work together at NetApp, the IT team differentiates a DevOps team from a regular development team based on these 10 rules.
Spot Lets NetApp Get the Most Out of the Cloud – NetApp
Prior to NetApp acquiring Spot.io, two of its IT teams had adopted Spot in their operations: Product Engineering for Cloud Volumes ONTAP test automation and NetApp IT for corporate business applications. Check out the results in this infographic.
NetApp has fully embraced tools that allow for seamless, collaborative work from home, and as a result was fully prepared to minimize COVID-19's impact on how we conduct business. Check out this infographic for a look at results from the new remote work reality.
4 Ways FlexPod Forms the Foundation for Cisco and NetApp Success – NetApp
At Cisco and NetApp, seeing our customers succeed in their digital transformations means that we’ve succeeded too. But that’s only one of the ways we measure our performance. What’s another way? Hearing how our wide-ranging IT support helps Cisco and NetApp thrive. Here’s what makes FlexPod an indispensable part of Cisco’s and NetApp’s IT departments.
With the widespread adoption of hybrid multicloud as the de-facto architecture for the enterprise, organizations everywhere are modernizing to deliver tangible business value around data-intensive applications and workloads such as AI-driven IoT and Hyperledgers. Shifting from on-premises to public cloud services, private clouds, and moving from disk to flash – sometimes concurrently – opens the door to enormous potential, but also the unintended consequence of IT complexity.
With the widespread adoption of hybrid multicloud as the de facto IT architecture for the enterprise, organizations everywhere are modernizing to deliver tangible business value around data-intensive applications and workloads such as AI-driven IoT and indelible ledgers.
10 Reasons Why Your SAP Applications Belong on NetApp – NetApp
NetApp has been supporting SAP for 20 years, delivering advanced solutions for SAP applications. Here are 10 reasons why your SAP applications belong on NetApp!
Redefining HCI: How to Go from Hyper Converged to Hybrid Cloud Infrastructure – NetApp
The hyper converged infrastructure (HCI) market is entering a new phase of maturity. A modern HCI solution requires a private cloud platform that integrates with public clouds to create a consistent hybrid multi-cloud experience.
During this webinar, NetApp and an IDC guest speaker covered what led to the next generation of hyper converged infrastructure and which five capabilities are required to go from hyper converged to hybrid cloud infrastructure.
As we enter 2019, what stands out is how trends in business and technology are connected by common themes. For example, AI is at the heart of trends in development, data management, and delivery of applications and services at the edge, core, and cloud. Also essential are containerization as a critical enabling technology and the increasing intelligence of IoT devices at the edge. Navigating the tempests of transformation are developers, whose requirements are driving the rapid creation of new paradigms and technologies that they must then master in pursuit of long-term competitive advantage. Here are some of our perspectives and predictions for 2019.
Artificial Intelligence Is a Top Priority in German Companies – NetApp
According to a recent survey by NetApp, the leading data management specialist in the hybrid cloud, artificial intelligence (AI) is becoming increasingly relevant in German companies.
Hyperconvergence: How It Improves the Economics of Your IT – NetApp
In this NetApp webinar we present how NetApp HCI helps improve the economics of IT: accelerating and ensuring performance for each application; simplifying your data center and making your architecture more scalable by reducing waste; implementing and expanding your HCI infrastructure quickly and inexpensively; and making your management even simpler and more intuitive, saving time and using the skills you already have in the company.
NetApp IT’s Tiered Archive Approach for Active IQNetApp
NetApp AutoSupport technology proactively monitors the health of NetApp systems installed at customer’s location and provides 24/7 actionable intelligence to optimize their storage environment. The amount of data received back to NetApp doubles approximately every 16 months. To manage the swelling waves of data to archive, NetApp IT sought a more flexible solution.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Assure Contact Center Experiences for Your Customers With ThousandEyes
Orange Business Services: A Telecom Business Reinvents Itself for the Cloud Era
Technical Case Study
How Orange Business Services built flexible computing offers
By Yann Degardin, Technical Project Lead, Orange Business Services

When your industry's revenue projections flatten, how can you break away and continue to grow? At Orange Business Services, we decided to reinvent the company by becoming a cloud services provider—entering a market still at the beginning of its growth trajectory.
Orange Business Services, the business services arm of global telecommunications leader Orange, provides communications services to companies in France and to multinational companies throughout the world. We serve more than two million businesses, and we knew that many of them wanted to move business applications out of their own data centers and into the cloud—shifting a capital expense to an operational expense and freeing up their internal IT teams to focus on the core business.

Today, our cloud services, built on NetApp® storage, are transforming our customers' businesses. For example, Tiens Group, a multinational conglomerate, uses one of our Flexible Computing infrastructure-as-a-service (IaaS) offerings to provision and manage servers in multiple countries from its headquarters in China. GFI Informatique uses Flexible Computing to cost-effectively host its award-winning cloud security services for networks, email, and websites. Exaegis, a French provider of software-as-a-service offerings to the financial industry, leverages Flexible Computing to guarantee the security and continuity of services to its customers.
[Graphic: in telecom services, demand for bandwidth is growing while industry profits and margins are eroding, and competition is increasing.]
Sidebar: Flexible Storage Provisioning
1. When customers sign up for Flexible Computing services, they specify the amount of storage they need today and the service level agreement (SLA)—for example, 50GB of gold-tier storage and 100GB of silver-tier storage. They also specify the maximum capacity, such as 1TB each for gold and silver.
2. Our orchestrator provisions all the resources (storage, compute, network).
3. As our customers' storage needs fluctuate, they visit our self-service portal to increase or decrease the allocation. We bill only for resources allocated each day.
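The provisioning-and-billing flow described in the sidebar can be sketched as a small model. The class name, prices, and method names here are illustrative stand-ins, not part of the actual Orange service; only the tier/maximum/daily-billing structure comes from the text.

```python
# Illustrative sketch of the sidebar's model: customers pick per-tier
# allocations up to a stated maximum, adjust them day by day via the
# portal, and are billed only for what is allocated each day.
# Prices below are invented for the example.

PRICE_PER_GB_DAY = {"gold": 0.10, "silver": 0.04}  # assumed rates

class FlexibleStorageAccount:
    def __init__(self, max_gb):
        self.max_gb = max_gb                      # e.g. {"gold": 1000, "silver": 1000}
        self.allocated_gb = {t: 0 for t in max_gb}
        self.charges = 0.0

    def set_allocation(self, tier, gb):
        """Self-service portal call: raise or lower today's allocation."""
        if gb < 0 or gb > self.max_gb[tier]:
            raise ValueError(f"{tier} allocation must stay within 0..{self.max_gb[tier]} GB")
        self.allocated_gb[tier] = gb

    def bill_day(self):
        """Charge one day's usage based on allocation, not maximum capacity."""
        day = sum(PRICE_PER_GB_DAY[t] * gb for t, gb in self.allocated_gb.items())
        self.charges += day
        return day

acct = FlexibleStorageAccount({"gold": 1000, "silver": 1000})
acct.set_allocation("gold", 50)             # 50GB of gold-tier storage
acct.set_allocation("silver", 100)          # 100GB of silver-tier storage
print(round(acct.bill_day(), 2))            # day 1 billed on 150GB allocated
acct.set_allocation("silver", 20)           # customer shrinks the allocation
print(round(acct.bill_day(), 2))            # day 2 billed on the new, smaller total
```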
Elevating Our Customers’ Expectations of Cloud Services
To give these customers the confidence to operate a major part of their business
in the cloud, we needed to design an architecture that delivered a simple user
experience, agility, flexibility, reliability, and security.
Our flagship cloud service is Flexible Computing, an IaaS offering available
in all 220 countries and territories where Orange Business Services operates.
To differentiate our Flexible Computing offers, we focused on four capabilities:
• Self-service provisioning. Customers can self-provision infrastructure in minutes from an intuitive browser interface. For example, if you need a web server in another country, with a few clicks you can provision 10 virtual machines running Windows® 8, each with 4GB RAM and 200GB disk space, and 1Gbps throughput between virtual machines.
• Tiered service levels. Customers select silver or gold service tiers for
computing, networking, and storage. For instance, you have the flexibility
to select 25GB with very fast access for business intelligence workloads
and 30GB with moderately fast access for your financial applications
(see sidebar).
• Automation. To minimize operational costs, we automated provisioning,
using our own orchestration software in conjunction with NetApp OnCommand®.
When customers create a virtual machine, the orchestration software provisions
storage from a shared pool. We wrote scripts to coordinate all activities—
for example, ensuring that storage is properly and accurately provisioned
before virtual machines are provisioned.
• Reporting. Customers can view real-time and historical usage reports on
demand. We also generate our own reports for capacity planning and
chargeback.
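The ordering constraint the automation bullet describes, storage must be provisioned and verified before any virtual machine is created from it, can be sketched as follows. The function names are stand-ins for the orchestration scripts, not real NetApp or OnCommand APIs.

```python
# Minimal sketch of the coordination the scripts enforce: provision
# storage from the shared pool first, confirm it is ready, and only
# then create virtual machines on it. All names are illustrative.

def provision_storage(pool, name, gb):
    """Stand-in for the OnCommand-driven storage provisioning step."""
    if pool["free_gb"] < gb:
        raise RuntimeError(f"pool exhausted: need {gb}GB, have {pool['free_gb']}GB")
    pool["free_gb"] -= gb
    return {"datastore": name, "gb": gb, "ready": True}

def provision_vm(datastore, vm_name):
    """VM creation is only attempted once its datastore reports ready."""
    if not datastore.get("ready"):
        raise RuntimeError("datastore not ready; refusing to create VM")
    return {"vm": vm_name, "datastore": datastore["datastore"]}

pool = {"free_gb": 1000}
ds = provision_storage(pool, "cust42-gold", 200)        # storage first...
vms = [provision_vm(ds, f"web-{i}") for i in range(10)]  # ...then the VMs
print(len(vms), pool["free_gb"])  # 10 VMs on one 200GB datastore; 800GB left
```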
The Storage Challenge
Fast time to market was important because our existing telecom customers
were eager for IaaS, and we wanted to be the company to serve them. To meet our
aggressive launch schedule, we initially used the same third-party Fibre Channel
storage area network (SAN) platform that we already used for our own business,
in conjunction with Cisco® Catalyst® switches.
But the SAN platform hampered our flexibility in two ways. First, the maximum
LUN size on our Fibre Channel platform was 16TB. Therefore, we had to aggregate LUNs for
customers that wanted larger datastores, and aggregation can decrease virtual
machine performance. Also, a customer that needed 80GB had to provision and
pay for 200GB, the smallest unit. We believe that our customers should have the
flexibility to provision any size datastore.
The other drawback of the previous storage solution was that expanding or
contracting a customer’s datastore required manual effort from our operations
team, an unsustainable operational model. To increase capacity, we had to
aggregate cells into a single datastore that was visible from VMware® and from
the customer portal. Scheduling and completing the work typically took two full
days—not a scalable process. And to decrease capacity, we had to migrate
customer data to a smaller datastore. We couldn’t simply remove a LUN from
the datastore because we would lose data.
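The two pain points above can be put into numbers. The 16TB LUN limit and the 200GB minimum provisioning unit come from the text; the arithmetic below is just an illustration of their effect.

```python
# Back-of-the-envelope numbers for the Fibre Channel limitations:
# LUN aggregation for large datastores, and a 200GB minimum unit.
import math

MAX_LUN_GB = 16 * 1024   # 16TB maximum LUN size on the old platform
MIN_UNIT_GB = 200        # smallest provisioning unit

def luns_needed(datastore_gb):
    """How many LUNs must be aggregated for a datastore of this size."""
    return math.ceil(datastore_gb / MAX_LUN_GB)

def billed_gb(requested_gb):
    """Capacity the customer actually pays for, rounded up to 200GB units."""
    return math.ceil(requested_gb / MIN_UNIT_GB) * MIN_UNIT_GB

print(luns_needed(100 * 1024))  # a 100TB datastore needs 7 aggregated LUNs
print(billed_gb(80))            # an 80GB request is billed as 200GB
```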
How We Built a Flexible Cloud Architecture with NetApp Solutions
We gained the flexibility that differentiates our cloud service by replacing our
original cloud storage with the NetApp Unified Storage Architecture. Multiprotocol
support in NetApp storage enables us to use the same storage architecture for
primary and backup storage and for physical and virtual servers. This relieves
our IT team from having to learn and manage multiple storage environments.
In our data centers in France (Chevilly and Rueil) and Singapore, we deployed
paired NetApp FAS6240 storage systems for production data and paired
NetApp FAS3240 storage systems for backup (Figure 1).
[Diagram: users and administrators reach the virtual data center over the Internet and IP VPN, through a firewall and SSL gateway; internet and intranet front-end zones with load balancers connect to back-end zones served by paired NetApp FAS6240 systems, with backups on NetApp FAS3240.]
Figure 1) Flexible Computing services network architecture based on a NetApp storage infrastructure.
Support for Network File System (NFS) protocol in NetApp storage solved the problems we had with the Fibre Channel SAN, making capacity management far more efficient (see sidebar).

Currently, we offer two types of Flexible Computing services. Flexible Computing Premium provides virtual servers, physical servers, or both. Customers can either manage their infrastructure and applications themselves or ask Orange Business Services to manage all or some of the stack. Daily backup to a secondary data center is standard, and we're adding an optional disaster recovery plan. Customers that have the internal resources to manage their infrastructure use the Flexible Computing Express service, which is self-managed and includes virtual machines only. See Figure 2.

Sidebar: Four Benefits of NetApp NFS Support for Orange Business Services Flexible Computing
• We can create datastores of up to 100TB without the performance degradation that results from aggregating LUNs.
• The time to expand or shrink a volume decreased from several hours to a few seconds.
• Customers can decrease storage without any involvement by our IT team.
• We can provide SLAs for 99.95% availability, partly because NFS makes it easier to switch a volume from one virtual machine to another, and to fail over between paired controllers.
[Diagram: Internet and IP VPN traffic passes through a firewall and load balancer to front-end and back-end zones in the virtual data center, backed by paired NetApp FAS6240 systems.]
Figure 2) Flexible Computing Express network architecture with NetApp storage.
Sidebar: Silver = 160 IOPS/TB; Gold = 600 IOPS/TB
Storage Innovations Behind Flexible Computing Services
NetApp technologies play a major role in the customer experience as well as
the cost efficiency of our Flexible Computing platform.
Tiered Service Levels
Providing tiered SLAs
We’re able to offer tiered service levels for storage performance—Gold
and Silver—using the same NetApp storage infrastructure. The technology
behind tiered service levels is NetApp Flash Cache™, which accelerates data
access by caching recently read user data or NetApp metadata.
Customers that request the gold service receive an SLA for 600 IOPS per terabyte. Their data resides on SAS drives that are front-ended by Flash Cache. Customers that choose the silver service receive an SLA for 160 IOPS per terabyte. With Flash Cache, we can meet this SLA using lower cost SATA drives, helping to keep service costs down. We think of Flash Cache as bridging the gap between our SLAs and actual disk performance.
We use NetApp FlexShare® to specify the relative priorities of volumes, allocating
80% of the Flash Cache capacity to gold-tier customers and the remaining
20% to silver-tier customers.
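The tiering arithmetic above can be made concrete. The per-TB IOPS rates and the 80/20 Flash Cache split come from the text; the function names and the example cache size are illustrative.

```python
# Sketch of the tiered-SLA arithmetic: per-TB IOPS commitments per tier,
# and the FlexShare-style 80/20 split of Flash Cache capacity.
IOPS_PER_TB = {"gold": 600, "silver": 160}
CACHE_SHARE = {"gold": 0.80, "silver": 0.20}

def sla_iops(tier, capacity_tb):
    """Total IOPS the SLA commits for a volume of this size and tier."""
    return IOPS_PER_TB[tier] * capacity_tb

def cache_split(total_cache_gb):
    """Flash Cache capacity assigned to each tier under the 80/20 split."""
    return {t: share * total_cache_gb for t, share in CACHE_SHARE.items()}

print(sla_iops("gold", 2))    # a 2TB gold volume carries a 1200 IOPS SLA
print(sla_iops("silver", 2))  # a 2TB silver volume carries a 320 IOPS SLA
print(cache_split(512))       # e.g. 512GB of cache split 80/20 between tiers
```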
Keeping costs down by reducing overall storage requirements
Three NetApp technologies—deduplication, thin provisioning, and Snapshot™—
provide 50% more storage capacity than our previous storage infrastructure for
the same cost.
For example, we reduced total storage capacity requirements by 30%—up to
80% in a volume—by using NetApp deduplication to eliminate redundant blocks
of data within the same volume. Deduplication saved 120TB by locating identical
blocks of data and replacing them with references to a single shared block.
Deduplication savings are allocated to volumes, so we wouldn’t see any benefit
from deduplication if we built each customer’s storage in a separate volume.
Therefore, we build several customers’ storage in the same volume.
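A toy model shows why co-locating customers in one volume increases the savings: identical blocks are stored once and referenced thereafter. The 4KB block size and the hashing scheme here are illustrative, not NetApp's actual on-disk mechanism.

```python
# Toy illustration of block-level deduplication within a single volume:
# count logical blocks versus unique blocks to estimate the savings.
import hashlib

BLOCK = 4096  # 4KB blocks, an assumed dedup granularity

def dedup_stats(volume_bytes):
    """Return (logical_blocks, unique_blocks) for a volume's raw bytes."""
    blocks = [volume_bytes[i:i + BLOCK] for i in range(0, len(volume_bytes), BLOCK)]
    unique = {hashlib.sha256(b).hexdigest() for b in blocks}
    return len(blocks), len(unique)

# Two "customers" in the same volume, each storing the same 8-block image:
os_image = b"".join(bytes([i]) * BLOCK for i in range(8))
volume = os_image + os_image       # the second copy dedups to references
logical, unique = dedup_stats(volume)
print(logical, unique)             # 16 logical blocks, 8 unique blocks
print(f"{1 - unique / logical:.0%}")  # half the blocks become references
```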
We reduced capacity requirements even further by using NetApp Snapshot software in our Chevilly-Larue data center to make point-in-time copies. With our previous storage infrastructure, the copy took as much space as the original. With Snapshot software, in contrast, the original plus copy takes only 20% to 30% more space than the original alone.
Making use of space that customers provision but don’t use
Out of habit, some customers still request more storage capacity than they
actually need—sometimes even double or triple the capacity. They still can’t
quite believe that they can increase capacity in minutes, making it unnecessary
to overprovision.
Overprovisioning increases our data center space, power, and cooling costs.
We significantly reduced these costs by using NetApp thin provisioning,
which presents more logical storage to customers than we actually have
in our physical storage pool. For example, suppose a customer requests
100GB of space. Instead of allocating the space upfront, the architecture
dynamically allocates space to the LUN as data is written. So, if the customer
writes 40GB of data, that’s what the NetApp storage system allocates.
The remaining 60GB remains in a pool of storage that's available on demand
to this customer—or any other customer. And when customers delete data,
thin provisioning releases the free space back to the common storage pool.
For some customers, thin provisioning has decreased committed storage by a factor of six. It is a win-win: our customers manage their storage costs at the most granular level, and we reduce the hardware footprint in our data center.
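The thin-provisioning behavior just described, promise logical capacity, allocate physical space from a shared pool only on write, and return it on delete, can be sketched as a minimal model. The class and method names are illustrative, not NetApp API names.

```python
# Minimal model of thin provisioning: the customer sees logical_gb,
# but physical space is drawn from a shared pool only as data is
# written, and flows back to the pool when data is deleted.

class ThinPool:
    def __init__(self, physical_gb):
        self.free_gb = physical_gb

class ThinLUN:
    def __init__(self, pool, logical_gb):
        self.pool = pool
        self.logical_gb = logical_gb   # what the customer is promised
        self.used_gb = 0               # what is physically allocated

    def write(self, gb):
        if self.used_gb + gb > self.logical_gb:
            raise ValueError("write exceeds the LUN's logical size")
        if gb > self.pool.free_gb:
            raise RuntimeError("shared pool exhausted")
        self.pool.free_gb -= gb
        self.used_gb += gb

    def delete(self, gb):
        gb = min(gb, self.used_gb)
        self.used_gb -= gb
        self.pool.free_gb += gb        # space returns to the common pool

pool = ThinPool(physical_gb=500)
lun = ThinLUN(pool, logical_gb=100)  # customer "requests 100GB"
lun.write(40)                        # writes 40GB of data
print(lun.used_gb, pool.free_gb)     # only 40GB allocated; 460GB still pooled
lun.delete(10)
print(lun.used_gb, pool.free_gb)     # 30GB allocated; freed space back in pool
```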
To make sure that we always have enough capacity should customers suddenly
need it, we continually monitor capacity using NetApp OnCommand Unified
Manager, which integrated easily into our internal management interface.
Building a secure, multi-tenant environment
Our cloud customers want assurance that their data will remain private.
We designed the infrastructure to provide enterprise levels of security, logically
partitioning the infrastructure for each customer and giving each customer
a dedicated virtual firewall and dedicated VLANs.
Meeting business continuance and disaster recovery needs
Our standard backup policy is to take daily Snapshot copies. With NetApp
SnapManager ® for Virtual Infrastructure, backups run on dedicated virtual
storage system servers, enabling customers to use their production servers
to run applications.
In the future, we will provide a disaster recovery option for our Flexible Computing
Premium service. We plan to use the NetApp SnapMirror® data replication
solution to copy virtual machine images to our backup facility in Val de Reuil
on a daily basis. If a regional or data center disaster occurs, we can bring
customers online in the backup data center within a few hours. See Figure 3.
[Diagram: the primary site (NetApp FAS6240 with a virtual server farm) backs up via NetApp SnapVault, managed through NetApp OnCommand, to a backup site (NetApp FAS3240), and replicates via NetApp SnapMirror to a DR site (NetApp FAS3240).]
Figure 3) Backup and disaster recovery network architecture with NetApp storage.
Simplifying management
We didn’t need to hire a large team of storage administrators for the Flexible
Computing cloud service because we’ve automated most of the provisioning
process. When we deployed the NetApp infrastructure, we used NetApp
OnCommand management software to configure default policies, such as
backup. As we add more disks, we can apply these policies with a few clicks,
helping us to keep costs down as the business grows.
As mentioned earlier, we carefully monitor storage consumption in all data centers, using NetApp OnCommand Unified Manager, to make sure our customers can increase their allocation whenever needed.
And, to identify storage infrastructure issues before they affect our customers,
we use a NetApp service called My AutoSupport™. We periodically visit a website
that identifies potential issues such as out-of-date firmware or software and
gives us step-by-step instructions to avoid disruptions.
Customer Benefits: Increased Business Agility
For Orange Business Services, branching out from the saturated telecom market
to an industry at the beginning of its growth curve is increasing our market
potential. We expect to earn €500 million in revenue from cloud services by 2015.
Offering highly flexible cloud services also increases our business value to
customers, making us a strategic business partner. For example, our customer
Tiens Group, headquartered in China, showcases how our Flexible Computing
offers can increase business agility and transform IT economics. A multinational
conglomerate, Tiens Group is involved in biotechnology, health management,
hotels and tourism, educational training, e-commerce, finance investment,
and real estate. The company already has offices in 110 countries and is
expanding rapidly in Europe.
Sidebar: Orange Business Services at a glance
• World's largest data network
• 220 countries and territories
• €500 million cloud revenues by 2015
• 35PB of NetApp storage
Tiens Group now hosts its European website using our Flexible Computing Express cloud service, as part of the company's strategy to "Think globally, execute locally." Website administrators in China can access their servers and storage in France over the Orange VPN. To anticipate when they need more capacity, they view real-time and historical utilization from our web portal.

Flexible Computing Express reduced total cost of ownership for web infrastructure for Tiens Group. The company has also begun using Flexible Computing Express to centrally provision IT infrastructure for offices in any country.
Yann Degardin, Technical Project Lead, Orange Business Services

Yann Degardin has been a member of the Orange Business Services IT team since 2006. As technical lead for the Orange Business Services Flexible Computing services, he spearheaded the storage infrastructure design, product launch, and lifecycle management program. Mr. Degardin brings firsthand knowledge of operations to cloud design, having previously served as operations coordinator for France Telecom and Equant, now part of Orange Business Services. Earlier in his career, Mr. Degardin worked for Logitech Informatique and SII. He is a graduate of the Louis de Broglie School of Engineering and lives in France with his wife and two children.
Our growing customer base includes companies from a variety of industries,
all attracted by the ease of adding or contracting resources and the appeal
of paying only for the resources they need. GFI Informatique uses a Flexible
Computing service to host its award-winning cloud security services for
networks, email, and websites. A global €20+ billion luxury goods company
takes advantage of Flexible Computing to manage more than 50 brands,
and gives our service credit for helping to accelerate time to market.
What’s Next
Overcoming our earlier storage challenge lifted our cloud services to a new
level, and we’ll migrate 200 Flexible Computing customers from the old storage
platform to the NetApp platform in 2013. With NetApp Virtual Storage Tiering,
we’ve given our customers a choice of service levels for storage access,
depending on their application requirements. And we’ve positioned ourselves
to scale cost effectively to serve more customers working with more data
by taking advantage of NetApp features such as NetApp deduplication
and thin provisioning.
We’re continuing our partnership with NetApp as we enhance our existing cloud
services and plan new ones to meet our customers’ emerging business needs.
Some of our plans include:
• Infrastructure. We’re introducing a Flexible Computing offer tailored for
midmarket accounts, as well as a private cloud offer. Soon our pilot disaster
recovery service will move into production.
• Workspace. Our Flexible Workspace offer will allow users to access virtual
desktops from anywhere, on any device, helping our customers support a
mobile workforce while lowering desktop costs.
• Private and public cloud services. We are building a private cloud offer,
mainly for large accounts, and working with partners on a public cloud
offering called Cloudwatt.
Our relationship with NetApp involves much more than technology. NetApp has
become a true partner, working in close collaboration with Orange Business
Services to help us reinvent our business for the cloud era.