It is 2019. Why are we still backing up production storage? The simple answer is that most production storage systems can provide some level of data protection, but they can't fully replace backup. It's time to consider what backup independence means. The technology is there for production storage to replace backup, but most organizations haven't fully leveraged it. Join Storage Switzerland and ClearSky Data for our webinar to learn how to design a self-protecting production storage infrastructure that not only protects data but replaces backup.
Join us to learn:
• The difference between data protection and backup
• Why most production systems can’t replace backup
• How to architect a production system that is fully protected and achieves backup independence
Realtime Indexing for Fast Queries on Massive Semi-Structured Data (ScyllaDB)
Rockset is a realtime indexing database that powers fast SQL over semi-structured data such as JSON, Parquet, or XML without requiring any schematization. All data loaded into Rockset is automatically indexed, and a fully featured SQL engine powers fast queries over semi-structured data without requiring any database tuning. Rockset exploits the hardware fluidity available in the cloud and automatically grows and shrinks the cluster footprint based on demand. Available as a serverless cloud service, Rockset is used by developers to build data-driven applications and microservices.
In this talk, we discuss some of the key design aspects of Rockset, such as Smart Schema and Converged Index. We describe Rockset's Aggregator Leaf Tailer (ALT) architecture that provides low latency queries on large datasets. Then we describe how you can combine lightweight transactions in ScyllaDB with realtime analytics on Rockset to power a user-facing application.
Webinar: Eliminate Backups and Simplify DR with Hybrid Cloud Storage (Storage Switzerland)
The cloud should be a valuable ally in helping organizations eliminate backup infrastructure and increase their disaster recovery (DR) confidence. The reality is that current cloud backup and DR solutions fall short because they don't fully exploit cloud resources.
Most cloud backup and DR solutions still require on-premises infrastructure and actually increase costs by requiring multiple data protection copies, both on prem and in the cloud.
In this webinar, join Storage Switzerland and ClearSky to learn why current solutions aren't doing enough to eliminate backups or simplify DR, and how a complete hybrid cloud storage service can meet these goals without compromising performance.
[NetApp] Simplified HA/DR Using Storage Solutions (Perforce)
Perforce administrators have several choices for HA/DR solutions depending on RTO/RPO objectives. Using an effective storage solution such as NetApp filers simplifies HA/DR planning in several ways. In this session we'll look at using a NetApp filer for more reliable HA in the event of storage or application failure and simpler DR replication. In the latter case, deduplication and SnapMirror technology can significantly reduce the amount of data replicated to a remote site.
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc... (Red_Hat_Storage)
At Red Hat Storage Day New York on 1/19/16, Red Hat partner Seagate presented on how to implement dense storage using HDDs with SSDs and PCIe flash accelerator cards.
Red Hat's own Sr. Cloud Storage Solutions Architect Narendra Narang took the podium at Red Hat Storage Day New York 1/19/16 to highlight emerging use cases for Red Hat's software-defined-storage products.
"Data classification" is an umbrella term covering several related capabilities: locality-aware data placement; SSD/disk, normal/deduplicated, or erasure-coded data tiering; HSM; and so on. They share most of the same infrastructure, and so are proposed (for now) as a single feature.
In this session, we'll discuss new volume types in Red Hat Gluster Storage. We will talk about erasure codes and storage tiers, and how they can work together. Future directions will also be touched on, including rule based classifiers and data transformations.
You will learn about:
How erasure codes lower the cost of storage.
How to configure and manage an erasure coded volume.
How to tune Gluster and Linux to optimize erasure code performance.
Using erasure codes for archival workloads.
How to utilize an SSD inexpensively as a storage tier.
Gluster's erasure code and storage tiering design.
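The erasure-coding and tiering features above are driven from the Gluster CLI. A minimal sketch, assuming six storage nodes named server1..server6 and two SSD nodes named ssd1 and ssd2 (all hostnames and brick paths here are hypothetical):

```shell
# Create a dispersed (erasure-coded) volume: 6 bricks total,
# 2 of which hold redundancy data, so the volume survives
# the loss of any 2 bricks
gluster volume create ecvol disperse 6 redundancy 2 \
    server{1..6}:/bricks/ec1
gluster volume start ecvol

# Attach an SSD-backed hot tier in front of the erasure-coded
# cold tier (tiering was introduced in GlusterFS 3.7 and was
# deprecated in later releases)
gluster volume tier ecvol attach replica 2 \
    ssd1:/bricks/hot1 ssd2:/bricks/hot1

# Check the volume layout and tier status
gluster volume info ecvol
gluster volume tier ecvol status
```

A disperse-6/redundancy-2 layout stores 4 bricks of data plus 2 of parity, a 50% capacity overhead versus the 200% overhead of 3-way replication, which is where the cost savings mentioned above come from.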
Red Hat Storage, based on the upstream GlusterFS project, was developed as a distributed file system in the oil-and-gas, high-performance compute arena. With Red Hat Storage, you can easily set up flexible distributed storage using commodity x86 hardware.
Simple, inexpensive internal or JBOD storage can be linked across multiple physical servers and presented as a single storage namespace. This storage can be used for log files, web content, virtual machine images, home directories, and other storage use cases.
In this session, we’ll demonstrate how to:
Install Red Hat Gluster Storage
Configure disks
Link the storage nodes
Define storage bricks
Present storage to clients
We'll talk about tips and tricks, best practices, backup and recovery, and other storage-related topics.
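The steps above can be outlined from the command line. A minimal sketch, assuming three RHEL-based nodes named node1..node3 with a data disk mounted at /bricks/b1 on each (hostnames, paths, and the volume name are hypothetical):

```shell
# Install the Gluster server packages and start the management
# daemon on every node (package names vary by distribution)
yum install -y glusterfs-server
systemctl enable --now glusterd

# Link the storage nodes into a trusted pool (run once, from node1)
gluster peer probe node2
gluster peer probe node3

# Define the bricks and create a 3-way replicated volume across them
# (bricks should be subdirectories of the mounted filesystem)
gluster volume create webvol replica 3 \
    node1:/bricks/b1/brick node2:/bricks/b1/brick node3:/bricks/b1/brick
gluster volume start webvol

# Present the storage to a client via the native FUSE mount
mount -t glusterfs node1:/webvol /mnt/webvol
```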
Real-Time Streaming: Move IMS Data to Your Cloud Data Warehouse (Precisely)
With over 22,000 transactions processed every second, your mainframe IMS is a critical source of data for the cloud data warehouses that feed analytics, customer experience or regulatory initiatives. However, extracting data from mainframe IMS can be time-consuming and costly, leading to the exclusion of IMS data from cloud data warehouses altogether – and leaving valuable insights unseen.
Never ignore or manually extract mainframe IMS data again. In this on-demand webcast, you will learn how Connect CDC enables your team to develop integrations quickly and easily between mainframe IMS and cloud data warehouses in the most cost-effective way possible.
At the Public Sector Red Hat Storage Days on 1/20/16 and 1/21/16, Jason Calloway walked attendees through the basics of scalable POSIX file systems in the cloud.
Disaster Recovery is an expensive proposition. Because the consequences of not being prepared for a disaster are so severe, it is an expense that organizations accept. That's not to say organizations aren't always looking for ways to do DR better, faster and for less money. In this live webinar, join Storage Switzerland and ClearSky to learn how organizations can lower the cost of DR preparation and execution.
Cloud-based NAS services are now a reality thanks to new hybrid cloud models. Users benefit from local flash performance, while organizations benefit from cost-effective cloud storage that breaks the endless storage refresh cycle.
Register for our on demand webinar and learn:
* The Challenges with NAS Refreshes
* The Problems Continuing with On-Premises NAS
* Five Reasons to Consider The Cloud
Webinar: Overcoming the Storage Roadblock to Data Center Modernization (Storage Switzerland)
Organizations have tried a variety of solutions to regain control of their data storage infrastructure. They’ve invested in monolithic storage systems, software defined storage (SDS) and hyper-converged systems. While each approach may have brought some value, each failed in its primary task: consolidating storage resources. Each of these consolidation efforts is unable to consistently guarantee performance, scale capacity and drive down storage costs.
As a result, most organizations end up buying workload specific solutions for both legacy and modern applications. Most data centers today have a mixture of multiple all-flash storage systems, hyper-converged environments and high capacity data archives. They also have storage software for each use case. IT ends up dealing with a data management nightmare, which limits organizational efficiency and productivity.
Don’t give up! Join Storage Switzerland and Datera to learn how monolithic, software defined and hyper-converged architectures have let IT down and why the problem gets worse as data centers modernize. Attendees will learn how storage solutions need to change in order to eliminate primary storage silos while guaranteeing specific application performance, scaling to meet capacity demands and lower storage TCO.
Typical disaster recovery plans leverage backup and/or replication to move data out of the primary data center and to a secondary site. Historically, the secondary site is another data center that the organization maintains. But now, companies are looking to the cloud to become a secondary site, leveraging it as a backup target and even a place to start their applications in the event of a failure. The problem with this approach is that it merely simulates a legacy design and presents some significant recovery challenges.
Webinar: Using the Cloud to Fix Backup's Blind Spot - Endpoint Data Protection (Storage Switzerland)
Many data centers unwittingly have a blind spot in their backup strategy: endpoints like laptops and other devices are left exposed. Most organizations have no formal, centralized endpoint data protection strategy, even though studies indicate that over 60% of data on endpoints is unique to that device.
IT is counting on users to protect their data, but the reality is that the organization needs a centralized endpoint data protection strategy that fixes backup's blind spot.
Join Storage Switzerland and Infrascale for this on demand webinar to learn how to create an endpoint data protection strategy that leverages the cloud to fix the backup blind spot. The webinar also includes a demo of the solution, so you can see how efficiently IT can protect endpoints without disrupting end users.
Disaster Recovery Experience at CACIB: Hardening Hadoop for Critical Financia... (DataWorks Summit)
Hadoop is becoming a standard platform for building critical financial applications such as risk reporting, trading and fraud detection. These applications require high levels of SLAs (service-level agreements) in terms of RPO (Recovery Point Objective) and RTO (Recovery Time Objective). To achieve these SLAs, organizations need to build a disaster recovery plan that covers several layers, ranging from the infrastructure to the clients, passing through the platform and the applications. In this talk, we will present the different architecture blueprints for disaster recovery as well as their corresponding SLA objectives. Then, we will focus on the stretch cluster solution that Crédit Agricole CIB is using in production. We will discuss the solution’s advantages, drawbacks and the impact of this approach on the global architecture. Finally, we will explain in detail how to configure and deploy this solution and how to integrate each layer (storage layer, processing layer...) into the architecture.
In this presentation from the DDN User Meeting at SC13, Jeff Denworth provides a product update.
Watch the video presentation: http://insidehpc.com/2013/11/13/ddn-user-meeting-coming-sc13-nov-18/
ClearSky - Value to Managed Service Providers (rbcummings)
ClearSky provides Performance (Flash) Storage as a Service that includes all your data protection. No more building, buying, or running backup/DR with ClearSky Data.
- Acquire more customers
- Dramatically reduce your OpEx budget
- Lower customer churn
Adam Dagnall: Advanced S3 compatible storage integration in CloudStack (ShapeBlue)
Adam's slides from his talk at the CloudStack European User group meetup, March 13, London. To provide tighter integration between the S3 compatible object store and CloudStack, Cloudian has developed a connector to allow users and their applications to utilize the object store directly from within the CloudStack platform in a single sign-on manner with self-service provisioning. Additionally, CloudStack templates and snapshots are centrally stored within the object store and managed through the CloudStack service. The object store offers protection of these templates and snapshots across data centres using replication or erasure coding.
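Because the object store speaks the S3 API, standard S3 tooling works against it directly. A hedged sketch using the AWS CLI pointed at an S3-compatible endpoint (the endpoint URL, bucket name, and template file below are illustrative, not taken from the talk):

```shell
# Point standard S3 tooling at the S3-compatible object store
# (endpoint URL, bucket, and credentials are illustrative)
ENDPOINT=https://s3.example.internal

# Create a bucket for CloudStack templates
aws s3 mb s3://cloudstack-templates --endpoint-url "$ENDPOINT"

# Upload a template and list the bucket contents
aws s3 cp centos7-template.qcow2 \
    s3://cloudstack-templates/ --endpoint-url "$ENDPOINT"
aws s3 ls s3://cloudstack-templates --endpoint-url "$ENDPOINT"
```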
Webinar: Application Explosion - Rediscovering the Lost Art of Protection Ser... (Storage Switzerland)
By attending this webinar you’ll learn how to:
* Implement service levels like RPO and RTO throughout the data center
* Establish NEW service levels for copy management and recovered performance
* Fix the disconnect between service levels and data protection applications
Webinar: 3 Steps to Controlling the Secondary Storage Deluge (Storage Switzerland)
In this interactive webinar, we discuss the challenges secondary storage creates, how the cloud might help and where it might fall short. Then we examine how the cloud, combined with the right services, can help organizations control the secondary storage data deluge.
How to Migrate Workloads to the Google Cloud Platform (actualtechmedia)
IT organizations of all sizes are moving their workloads to the public cloud in order to gain business agility and unlimited workload scalability, and to free up their time to work on the projects that matter. One of the leaders in public cloud is the Google Cloud Platform (GCP).
The Pandemic Changes Everything, the Need for Speed and Resiliency (Alluxio, Inc.)
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
The Pandemic Changes Everything, the Need for Speed and Resiliency
Parviz Peiravi, Global CTO of Financial Services Solutions, Intel
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
The cloud seems like a logical destination for backup data. It is by definition off-site, and the organization no longer needs to worry about allocating valuable floor space to secondary data storage. The problem is that most cloud backup solutions fall short of delivering enterprise class data protection.
Most cloud backup solutions are too complicated to set up and upgrade, don't provide complete platform support, don't provide flexible recovery options, can't meet the enterprise's RTO/RPO requirements, and don't provide a class of support that enables organizations to lower their operational expense.
In this live webinar, Storage Switzerland and Carbonite discuss the five critical capabilities that enterprises looking to move to cloud backup need to make sure their solution has.
Join us for this event to learn:
1. Why switch to the cloud for enterprise backup?
2. The five critical capabilities enterprises MUST have in cloud backup solutions. What they are and why enterprises need them.
3. Why and where most solutions miss the mark
4. How Carbonite Server delivers the five critical capabilities
Solving Enterprise Challenges Through Scale-Out Storage & Big Compute (Avere Systems)
Google Cloud Platform, Avere Systems, and Cycle Computing experts will share best practices for advancing solutions to big challenges faced by enterprises with growing compute and storage needs. In this “best practices” webinar, you’ll hear how these companies are working to improve results that drive businesses forward through scalability, performance, and ease of management.
The slides were from a webinar presented January 24, 2017. The audience learned:
- How enterprises are using Google Cloud Platform to gain compute and storage capacity on-demand
- Best practices for efficient use of cloud compute and storage resources
- Overcoming the need for file systems within a hybrid cloud environment
- Understand how to eliminate latency between cloud and data center architectures
- Learn how to best manage simulation, analytics, and big data workloads in dynamic environments
- Look at market dynamics drawing companies to new storage models over the next several years
Presenters laid out a foundation for building infrastructure that supports ongoing demand growth.
Webinar: Are You Treating Unstructured Data as a Second Class Citizen? (Storage Switzerland)
Join Storage Switzerland and Aparavi for our in-person webinar to learn how to improve your ability to protect unstructured data while at the same time gaining insight into it.
To some, tape storage may seem like an outdated technology in the era of NAS and object-based storage. But, surprisingly, tape today is more relevant than ever. Even the most modern data centers can benefit from its low cost of ownership, scalability, reliability and security. In our on demand webinar, Storage Switzerland is joined by Spectra Logic, Fujifilm and Iron Mountain to discuss why tape use shouldn’t just continue but actually expand, including in hybrid cloud environments.
Special Presentation of Meet The CEOs - Commvault and Hedvig (Storage Switzerland)
Commvault has acquired Hedvig. What are the ramifications for the combined companies and the storage industry? Find out when we have Sanjay Mirchandani, CEO of Commvault and Avinash Lakshman, CEO of Hedvig in a special Live Edition of our Meet The CEO Webinar series.
Join us to learn a little bit more about the CEOs behind this acquisition, what motivated the two companies to come together, and what role Hedvig’s scale-out software-defined storage plays in Commvault’s future. Most importantly, ask questions directly of the CEOs.
Panel Discussion: Is Computational Storage a Better Path to Extreme Performance? (Storage Switzerland)
Vendors are re-writing software and adding custom hardware in an attempt to avoid bottlenecking extremely low latency NVMe Flash and Storage Class Memory technologies. Eventually, they are all at the mercy of physics. Data has to traverse an internal, and sometimes an external, network so the computing tier can process it. Computational Storage offers an alternative by performing at least some of the processing on the storage device itself, eliminating most of the network activity.
Join Storage Switzerland's Lead Analyst, George Crump, as he leads a panel of computational storage experts including NGD System's Scott Shadley, ScaleFlux's Thad Omura, and Samsung's Pankaj Mehra as they preview the Computational Storage Workshop at the Flash Memory Summit on Thursday, August 8th.
Webinar: Complete Your Cloud Transformation - Store Your Data in The Cloud (Storage Switzerland)
Organizations are moving to the cloud but according to a recent Osterman Research study, only 14% of companies have completed that transformation. The study clearly identifies data storage as an area where IT can easily accelerate their cloud transformation journey. Potentially more so than any other component, intelligently moving data to the cloud has the opportunity to significantly lower on-premises storage costs without the threat of impacting day to day operations.
Join Storage Switzerland, HubStor and Osterman Research for our webinar where we discuss the results of the Osterman Research study, what it means for IT, and how IT can take advantage of that research to leverage the cloud to alleviate data management and data protection concerns.
Webinar: Simplifying the Enterprise Hybrid Cloud with Azure Stack HCI (Storage Switzerland)
During our on demand webinar, “Simplifying the Large-Scale Hybrid Cloud”, Storage Switzerland and Axellio discuss how Microsoft Azure Stack HCI and Axellio’s FabricXpress Servers can deliver new levels of consolidation in the enterprise. Learn how to intelligently leverage Azure to simplify operations like data protection, business continuity, and data center operations – while deploying less infrastructure and less software for your demanding on-premises workloads.
Webinar: Designing a Storage Consolidation Strategy for Today, the Future and... (Storage Switzerland)
Most storage consolidation strategies fail because they attempt to consolidate to a single piece of storage hardware. To successfully consolidate storage, IT professionals need to look at consolidation strategies that worked. Server consolidation was VMware’s first use case. It was successful because instead of consolidating hardware, VMware consolidated the environment under a single hypervisor (ESXi) and console (vCenter) but still provided organizations with hardware flexibility. A successful storage consolidation strategy needs to follow a similar formula by providing a single software solution that controls a variety of storage hardware, but that software also has to extract maximum performance and value from each hardware platform on which it sits.
Join Storage Switzerland and StorOne in which we discuss how to design a storage consolidation strategy for today, the future and the cloud.
Webinar: Is It Time to Upgrade Your Endpoint Data Strategy? (Storage Switzerland)
In this webinar Storage Switzerland and Carbonite discuss the increased importance of endpoints and why organizations need to upgrade their strategy to make sure these devices are protected, secure and in compliance. In light of increasing legal requirements and potential penalties, organizations can no longer afford to ignore this critical issue.
Webinar: Rearchitecting Storage for the Next Wave of Splunk Data Growth (Storage Switzerland)
Join Storage Switzerland and SwiftStack, a Splunk technology partner, for our webinar where our panel of experts will discuss the value of having Splunk analyze larger datasets while providing insight into overcoming infrastructure cost and complexity challenges through Splunk enhancements like SmartStore.
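SmartStore decouples Splunk's compute from its storage by keeping warm buckets in an S3-compatible remote object store such as SwiftStack. A hedged indexes.conf sketch of the idea (the volume name, bucket, and endpoint URL below are illustrative):

```ini
; indexes.conf -- define an S3-compatible remote volume for SmartStore
[volume:remote_store]
storageType = remote
path = s3://smartstore-bucket
remote.s3.endpoint = https://swiftstack.example.internal

; Route an index's warm/cold buckets to the remote volume;
; the indexer cache then holds only recently accessed buckets
[main]
remotePath = volume:remote_store/$_index_name
```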
Backup software is continuously improving. Solutions like Veeam Backup and Replication deliver instant recoveries, enabling virtual machine volumes to instantiate directly on the backup device, without having to wait for data to transfer back to primary storage. These solutions can also move older backups to higher capacity, lower cost object storage or cloud storage systems. To deliver meaningful performance during instant recovery without exceeding the backup storage budget requires IT to re-think its backup storage architecture.
Modern backup processes need high-performance, low-capacity systems to deliver instant recovery, high-capacity, modest-performance systems to store backup data long term, and software to manage data placement for the most appropriate recovery performance without breaking the budget.
For over a decade Network Attached Storage (NAS) was the go-to file storage device for organizations needing to store large amounts of unstructured data. But unstructured data is changing. While large file use cases are still prevalent, small file use cases are becoming more dominant. Workloads like artificial intelligence, analytics and IoT are typically driven by millions, if not billions, of small files.
Object Storage is often hailed as the heir apparent but is it? Can file systems be redesigned to continue to support traditional NAS workloads while also supporting modern, small file and high velocity workloads? Join Storage Switzerland and Qumulo for our webinar, “NAS vs. Object — Can NAS Make a Comeback,” to learn the state of unstructured data storage and if NAS file systems can provide a superior alternative to object storage.
Join us at our event to learn:
• Why traditional NAS solutions fall short
• Why object storage systems haven't replaced NAS
• How to bridge the gap by modernizing NAS file systems
• Live Q&A with file system and NAS experts
Webinar: Overcoming the Shortcomings of Legacy NAS with Microsoft Azure (Storage Switzerland)
Most organizations use Network Attached Storage (NAS) to store data, but the modern workforce and organization expect more capabilities than what the typical NAS can provide. Also, as organizations themselves become more distributed, the idea of a single centralized file server with users tunneling through virtual private networks won’t scale. The common alternative, putting a NAS in each remote office, creates problems of its own when IT tries to make sure the data is protected and made available to the right users at the right time.
Join Storage Switzerland and Nasuni for our webinar “Providing Global File Services Using Microsoft Azure.” By attending this event, you’ll learn:
1. The shortcomings of the Legacy NAS
2. The Cloud Advantage
3. Where the Cloud Needs Help and How to Get it
Webinar: 3 Steps to be a Storage Superhero - How to Slash Storage Costs (Storage Switzerland)
Reducing, or at least slowing the growth of, storage costs is a top priority facing IT organizations in 2019. In this live webinar with Storage Switzerland and SolarWinds, you will learn the three steps IT professionals can take to lower storage costs WITHOUT buying more storage (the typical vendor answer). The biggest challenge is that IT professionals don't arm themselves with the tools they need to be successful, take the next step in their career path and, of course, save their company money.
Join our on demand webinar and learn:
1. How to eliminate and resolve storage problems rather than throwing hardware at them
2. How to plan and prepare for capacity growth and performance demands
3. How to manage multiple vendors' storage systems without replacing them
NVMe storage systems and NVMe networks promise to further reduce latency and increase performance beyond what SAS-based flash systems and current networking technology can deliver. To realize that gain, however, the data center must have workloads that can exploit the latency reduction and performance improvements NVMe offers. Vendors emphatically state that NVMe is the next must-have technology, yet many continue to ship SAS-based arrays using traditional networks.
How, then, do IT planners know that investing in NVMe will truly benefit their organizations' demanding applications and deliver a measurable return on investment? Just building a test environment for an NVMe evaluation can break the IT budget!
Register now to join Storage Switzerland, Virtual Instruments, and SANBlaze as we look at the state of the data center and provide IT planners with the information they need to decide if NVMe is an investment they should make now or if they should wait a year or more. The key is determining which applications can benefit from NVMe-based approaches.
In this event, IT professionals will learn:
- About NVMe, NVMe Storage Systems and NVMe over Fabric Networking
- The Performance Potential of NVMe Storage and Networks
- What attributes are needed for a workload to take advantage of NVMe
- Why NVMe creates problems for current IT testing strategies
- Why a Workload Simulation approach is the only practical way to test NVMe
- How to build a storage performance validation practice
Webinar: All in the Cloud - Data Protection Up, Costs Down (Storage Switzerland)
Managing and protecting critical data across servers and applications in multiple locations around the globe is challenging. And the more decentralized and complex your infrastructure, the more difficult it is to manage your data. The potential bad news? Data loss, site outages, revenue loss, and potential non-compliance with regulations.
But here’s the good news: centralizing data protection in the cloud can make all the difference. That’s why you should join our webinar and hear from storage expert George Crump of Storage Switzerland and Druva’s Chief Technologist, W. Curtis Preston, as they discuss:
• Why protecting a distributed data center is challenging with traditional methods
• How a cloud-centralized backup strategy can be a game changer for your organization
• How Druva can help you drastically improve data protection quality, reduce costs, and simplify global management and configuration
Hyperconverged Infrastructure (HCI) is supposed to simplify the data center by creating an environment that automatically scales as new applications and workloads are added. The problem is that the current generation of HCI solutions can only address specific use cases, like virtual desktops or tier 2 applications. First-generation HCI solutions don’t have the per-node power to accommodate enterprise workloads and tier 1 applications. Organizations need a next-generation HCI solution, HCI 2.0, that addresses HCI 1.0’s shortcomings and fulfills the original promises of HCI: lower costs, faster innovation, simpler scale, a single vendor, and unified management. These capabilities enable HCI 2.0 to handle a variety of storage-intensive workloads.
15 Minute Friday: Tips for The Weekend - Stop the Unstructured Data Madness (Storage Switzerland)
Join Storage Switzerland and Igneous for another Fifteen Minute Friday: Storage Tips for the Weekend. Most organizations are not satisfied with their ability to back up, recover, and correctly retain unstructured data. The combination of unprecedented growth and the increased scrutiny brought by regulations like GDPR and CCPA is pushing organizations to the brink. It's time to stop the madness!
By joining this short webinar, you’ll learn tips to overcome:
- The Unstructured Data Backup Challenge
- The Archive Challenge
- The Data Privacy / Data Retention Challenge
Webinar: 2019 Storage Strategies Series - What’s Your Plan for Object Storage? (Storage Switzerland)
Join Storage Switzerland, Caringo, Cloudian and Scality for a live roundtable discussion on object storage. Learn what you need to know to develop a strategy for object storage. Our panel of experts will discuss what object storage is, how it differs from and improves on other types of storage, and which use cases enterprises should consider.
Our object storage panel will provide specific details on how to get started with object storage, which use cases deliver the most immediate return on investment, and where object storage fits in an organization's overall storage strategy. The panel will also discuss what is coming next in object storage. Each expert will also give a brief overview of their company's object storage solution. We will close out the roundtable by taking questions from our live audience.
By attending one webinar, you’ll learn everything you need to know about object storage, gain actionable information for developing an object storage strategy, and hear from three top vendors about how their products fit into those plans.
Webinar: Designing Storage Architectures for Data Privacy, Compliance and Gov... (Storage Switzerland)
Managing data is about more than managing capacity growth; organizations today must adhere to increasingly strict data privacy, compliance, and governance regulations. Privacy regulations like GDPR and California’s Consumer Privacy Act place new expectations on organizations, requiring them not only to protect data but also to organize it so it can be found and deleted on request. Traditional backup and archive are ill-equipped to help organizations adhere to these new regulations.
In this webinar, join Storage Switzerland and Hitachi Vantara for a roundtable discussion on the meaning of these various regulations, their impact on traditional storage infrastructures, and how to design a storage architecture that can meet today’s regulations as well as tomorrow’s.
Webinar: The Elephant in the Datacenter - Protect, Manage, & Leverage Unstruc... (Storage Switzerland)
Join Storage Switzerland and Igneous for this on demand webinar where we dive into the results of a recent survey on the impact of unstructured data management and its challenges on organizations. During the webinar, we discuss best practices that organizations like yours can implement to effectively manage the unstructured data elephant.
Key webinar takeaways:
- The true value of unstructured data
- The challenges unstructured data creates
- Best practices for protection, management, and monetization
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report was prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across more than 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and new malware, including fresh variants and latent threats still at an early stage of development.
The latest edition of the OT/ICS and IoT Security Threat Landscape Report 2024 also covers:
- State of global ICS asset and network exposure
- Sectoral targets and attacks, as well as the cost of ransom
- Global APT activity, AI usage, actor and tactic profiles, and implications
- Rise in volumes of AI-powered cyberattacks
- Major cyber events in 2024
- Malware and malicious payload trends
- Cyberattack types and targets
- Vulnerability exploit attempts on CVEs
- Attacks on counties – USA
- Expansion of bot farms – how, where, and why
- In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
- Why are attacks on smart factories rising?
- Cyber risk predictions
- Axis of attacks – Europe
- Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which participants explored different ways to think about quality and testing across the DevOps infinity loop.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
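Under the hood, JMeter's Backend Listener ships each sample's metrics to InfluxDB as line-protocol points over HTTP, and Grafana then queries that measurement. As a rough illustration of the data format involved, here is a minimal sketch that builds one such point; the function name `format_point` and the tag/field choices are illustrative, not part of JMeter's or InfluxDB's API:

```python
# Sketch of InfluxDB 1.x line protocol, the wire format JMeter's Backend
# Listener uses when reporting test metrics. Names here are illustrative.

def format_point(measurement, tags, fields, timestamp_ns):
    """Build one line-protocol record: measurement,tags fields timestamp."""
    # Tags are appended to the measurement name, each prefixed with a comma.
    tag_str = "".join(f",{k}={v}" for k, v in sorted(tags.items()))
    # String field values are quoted; numeric values are written bare.
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement}{tag_str} {field_str} {timestamp_ns}"

# One load-test sample: a login transaction that took 245 ms and succeeded.
point = format_point(
    "jmeter",
    tags={"application": "demo", "transaction": "login"},
    fields={"responseTime": 245, "success": "true"},
    timestamp_ns=1_700_000_000_000_000_000,
)
print(point)
# jmeter,application=demo,transaction=login responseTime=245,success="true" 1700000000000000000
```

In practice you don't write this yourself: JMeter's built-in `InfluxdbBackendListenerClient` batches such points and POSTs them to InfluxDB's `/write` endpoint, and a Grafana dashboard reads the resulting measurement.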
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
3. The State of Production Storage Protection
● Media protection
● System protection
● Data protection
○ Snapshots
○ Replication
● But they lack backup
4. The Difference Between Protection and Backup
● Protection:
○ Rapid recovery
○ Limited “air gap”
○ Short-term retention
○ Increases primary storage capacity consumption
○ No data management / cost reduction
● Backup:
○ Rapid recovery
○ Air-gapped
○ Long-term retention
○ Primary storage capacity neutral, or decreases it
5. Why Most Production Systems Can’t Replace Backup
● Not designed for long-term retention
● Not air-gapped
● Do not reduce their own costs
● DR requires a second system and a second site
6. How to Design a Production System That Can Protect Itself
● Cloud-focused, but addresses latency
● An on-premises cache is not enough
● Needs a middle tier
● Unlimited snapshots on low-cost storage
● Replication to other cloud regions
7. Addressing Latency
● Flash on-premises, but as a cache, not as storage
● A middle tier to address cache misses without a round trip to the cloud
8. What About Disaster Recovery?
● Snapshots are unlimited and don’t impact performance
● Data is accessible from all tiers
● Performance of the middle tier is production class
● Disaster recoveries can occur in the public cloud and leverage cloud compute
9. Company background and momentum (CONFIDENTIAL)
MISSION: To build and deliver a storage service that manages the entire data lifecycle, combining the performance and availability of local enterprise storage with the scalability and economics of the cloud.
LEADERSHIP: Ellen Rubin (CloudSwitch/Verizon, Netezza/IBM); Laz Vekiarides (EqualLogic/Dell-EMC)
FOUNDED: 2014
HEADQUARTERS: Boston, MA
METROS: New York, Boston, DC, Chicago, Dallas, San Francisco
ENTERPRISE CUSTOMERS, INVESTORS, STRATEGIC ALLIANCES: [logos shown on the original slide]
10. One network, one copy of data: fully protected, accessed anywhere
[Diagram: ClearSky Edge appliances linking private clouds, data centers, and specialty clouds into one network]
12. ClearSky architecture for DR and data center consolidation
Benefits:
• RPO of 0 in-metro, 5-15 minutes out of metro; RTO <1 minute, instant restores
• Instantly access data from the cloud or a secondary site
• Perform DR tests on demand
• Achieve compliance
• Eliminate secondary data center(s)
• Rapid data ingest of 20+ TB/day
[Diagram: Customer DC 1 and Customer DC 2 (physical or virtual), each with a ClearSky Edge Cache, connect over VPN to the ClearSky service’s primary and secondary metro PoPs and backing cloud; end users in the customer environment access home directories through the service]
13. ClearSky’s automatic data protection and DR
● Use native VMware tools to manage data access
● No requirement for secondary data centers or infrastructure to manage
● Recovery Time Objectives (RTOs) that match your application SLAs
● vSphere protection providing VM-centric DR failover
● Monitoring and error remediation
● Failover RPO SLA success or failure notifications
● 24x7 support from our NOC support center
15. Why customers choose ClearSky Data
● Storage cost reduction: decrease the costs and complexity of storage infrastructure
● Data center consolidation: consolidate and minimize data center footprints
● Data access and management: manage data growth while optimizing access, security, and data protection
● Cloud adoption: harness the cloud without latency and cost variability
17. How to Design Self-Protecting Production Storage and Gain Backup Independence
For complete audio and Q&A, please register for the on-demand version: bit.ly/BackupIndep