Veritas is expanding its appliance portfolio to meet the challenges of today's rapidly changing data protection and data management environments. Join this session for a detailed look at the many new features and capabilities these next-generation Veritas appliances have to offer. You'll learn how new High Availability (HA) capabilities will reduce the operational costs of planned and unplanned downtime, explore the latest centralized appliance management capabilities, receive a detailed overview of new dedupe to the cloud capabilities, get a detailed look at expanded database protection capabilities, and much more. Don't miss this chance to gain deep technical insights into the many ways these next-generation Veritas appliances will improve your ability to protect your most critical data.
What can the new NetBackup Appliances offer your organization that Data Domain can't? This session is dedicated to exploring the answers. You'll learn how migrating from Data Domain and other dedupe appliances to the latest appliances from Veritas can reduce backup times and rack space, lower your power and cooling costs, and deliver crucial new 360 Data Management capabilities. After you attend, you'll understand exactly why modern backup appliances need to do more than dedupe--and how intelligent Veritas appliances can deliver the scale, performance, resiliency, availability, and small data center footprint you need.
An Introduction to Big Data, Hadoop architecture, HDFS and MapReduce. Some concepts are explained through animation which is best viewed by downloading and opening in PowerPoint.
Data lakes provide a flexible way to store large amounts of raw data from various sources without having to structure the data upfront. This allows for exploration of the data and helps break down data silos. Some benefits of data lakes include flexible data modeling, low costs, and acting as a staging area for ETL. However, data lakes also face challenges around data governance, metadata, security, and information lifecycle management. As data lakes mature, organizations typically progress through four stages - from standalone applications to building new applications on a Hadoop platform centered around the flexible data lake.
Unlocking the Full Power of Your Backup Data with Veritas NetBackup Data Virt... (Veritas Technologies LLC)
Your backup data is more powerful and valuable than you might think. In this session, Veritas experts will show you how you can leverage your backup data for much more than just restores using new Veritas Velocity powered NetBackup Data Virtualization capabilities. Find out how this new solution can add important new capabilities to your current NetBackup infrastructure--including self-service, instant data provisioning to end users, and solving for different use cases that require fast data distribution, such as Test Data Refresh for TestDev.
SplunkLive! Nutanix Session - Turnkey and scalable infrastructure for Splunk ... (Splunk)
Nutanix provides a turnkey and scalable infrastructure for Splunk:
1) The Nutanix solution uses SSD and a scale-out datacenter appliance to address Splunk's IO intensity and provide faster time to value.
2) It employs a scale-out cluster to eliminate server sprawl and simplify adding more data sources.
3) The converged and software-defined Nutanix platform virtualizes Splunk for enterprise features while improving performance, capacity, and manageability over direct deployment.
The Power of DataOps for Cloud and Digital Transformation (Delphix)
Companies have been trying to speed up innovation delivery for many years, but often at the expense of quality and security. Despite billions invested to accelerate innovation, projects are too often slowed by data friction: the result of growing volumes of siloed data and multiple competing requests for data.
Overcoming these sources of friction requires constant iteration across several key dimensions:
• Reducing the total cost of data by making it fast and efficient to deliver data, regardless of source or consumer. Automation and tooling are critical.
• Integrating security and governance into a seamless data delivery process. This requires integrated masking, but also a governance platform and process to ensure the right rules and access controls are in place.
• Breaking down silos between people and organizations. This starts with the organizational change to bring people together into one team, but requires technology change to provide self-service data access and control.
Your data is no longer constrained by location or infrastructure. So why should your data protection be any different? Attend this session to learn how Veritas Backup Exec can help you tear down the data protection silos between physical, virtual, and cloud; escape Veeam's virtual prison; free your data; eliminate the cost of paying for multiple data protection solutions; and stay protected no matter where your data lives. Find out how Backup Exec makes backup easy--with one platform, one console, one license that protects everything everywhere.
The document discusses EarthLink Cloud Server Backup and its advantages over traditional tape backup systems. Key points include:
- Cloud backup provides instant access to data for rapid recovery from disasters, unlike slow tape backups which are prone to errors.
- The cloud backup service eliminates upfront costs and reduces data center expenses, since you pay only for the storage you need.
- Data is securely stored in SSAE 16 compliant data centers and regularly backed up with encryption both at rest and in transit.
Siebel Clinical for Small and Medium-Sized Organizations (Perficient)
Param Singh gave a presentation on Siebel Clinical and cloud computing solutions for small to medium life sciences organizations. The presentation covered Siebel Clinical accelerators like ASCEND that provide pre-configured functionality to reduce implementation time and costs. It discussed how cloud computing, such as with ASCEND On-Demand, can make sophisticated software accessible with a low initial investment through a monthly subscription model while maintaining data security. Implementations were compared from fully custom to accelerators with various hosting options to the cloud-based ASCEND On-Demand solution.
How does SolrCloud ensure that replicated data remains consistent? How does Solr avoid data loss when hardware inevitably fails? In this talk, we will cover how Solr addresses failures and what recovery steps the cluster can automatically perform.
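(As background for that discussion: SolrCloud's recovery machinery is observable from the outside through the Collections API. The sketch below, a minimal Python example, polls CLUSTERSTATUS to watch replica states; the endpoint URL is a placeholder assumption and the field layout follows the standard CLUSTERSTATUS response.)

# Minimal sketch: watch SolrCloud replica states via the Collections API.
# The endpoint below is a placeholder; point it at any node in your cluster.
import requests

SOLR = "http://localhost:8983/solr"

resp = requests.get(f"{SOLR}/admin/collections",
                    params={"action": "CLUSTERSTATUS", "wt": "json"})
resp.raise_for_status()

for coll_name, coll in resp.json()["cluster"]["collections"].items():
    for shard_name, shard in coll["shards"].items():
        for replica_name, replica in shard["replicas"].items():
            # state is typically "active", "recovering", or "down";
            # a "recovering" replica is replaying updates from its leader.
            print(coll_name, shard_name, replica_name, replica["state"],
                  "leader" if replica.get("leader") == "true" else "")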
BigDataBx #1 - Atelier 1 Cloudera Datawarehouse Optimisation (Excelerate Systems)
The document discusses how Cloudera can optimize an enterprise data warehouse. It addresses challenges with existing complex architectures that include many specialized systems and data silos. This leads to issues with visibility, time to access data, and high costs of analytics. Cloudera proposes solutions like using their platform for a multi-workload analytic environment, active data archiving at a tenth the cost, faster and cheaper data transformations, and self-service business intelligence. Case studies show customers saving tens of millions through solutions like offloading processing, avoiding expansion costs, and getting insights from more extensive data exploration.
After a disaster, how much of your critical infrastructure and data could you recover? And how long would it take? To make sure you can answer these important questions with complete confidence, Veritas is adding machine learning technology to its data protection solutions. Attend this session to find out how combining machine learning and data protection enhances your ability to completely protect and recover critical systems and information more quickly and efficiently – no matter where it lives or what happens to it.
This is the tale of a project for a client with a "Cloud First" strategy, and how the client was unprepared for the implications and assumptions of the strategy.
We will explore the assumptions implied by the "Cloud First" strategy, and how, as they were tested, the design of the solution went from "Cloud First" to "Cloud, if possible" and finally to "Cloud, if we're lucky".
Through analysis of the assumptions and the reasons they failed, you will gain a valuable insight into the nature of a "Cloud First" strategy and some of the implications of this strategy.
The scenario that is explored includes the use of Software as a Service and Platform as a Service elements such as Office 365, Project Online, Azure Data Factory, Azure SQL DB and PowerBI.
How to Avoid Disasters via Software-Defined Storage Replication & Site Recovery (DataCore Software)
Shifting weather patterns across the globe force us to re-evaluate data protection practices in locations we once thought immune from hurricanes, flooding and other natural disasters.
Offsite data replication combined with advanced site recovery methods should top your list.
In this webcast and live demo, you’ll learn about:
• Software-defined storage services that continuously replicate data, containers and virtual machine images over long distances
• Differences between secondary sites you own or rent vs. virtual destinations in public Clouds
• Techniques that help you test and fine tune recovery measures without disrupting production workloads
• Transferring responsibilities to the remote site
• Rapid restoration of normal operations at the primary facilities when conditions permit
Implementing a long term data retention strategy that leverages the cloud (Veritas Technologies LLC)
This document discusses implementing a long-term data retention strategy using cloud storage. It outlines challenges with cost, complexity and visibility of long-term retention. The Veritas solution integrates data retention, visibility and management across on-premise and cloud storage using NetBackup and Access. It provides several use cases including building a private cloud for retention, moving retention to public cloud for cost savings, and using Information Map for data visibility and governance. The presentation includes a customer example and demo of the Veritas tools.
This document discusses laptop backup solutions and introduces Druva inSync. It notes that 80% of corporate data resides on laptops and desktops, which are not always connected and have changing IP addresses. Existing solutions like Time Machine have limitations, as backups may be stored locally and lost along with a lost laptop. An ideal laptop backup solution would not require removable devices or human intervention, would use incrementals without repeated full backups, and would have cross-site deduplication. Druva inSync is presented as a simple, fully automated solution that uses deduplication and WAN optimization to provide efficient backups with fast restores while minimizing storage usage and bandwidth. It can scale to back up 2,000 laptops from a single server appliance.
NetBackup CloudCatalyst: Efficient, Cost-Effective Deduplication to the Cloud (Veritas Technologies LLC)
Is your organization looking for a more efficient, cost-effective way to use public and private cloud storage as a backup target? Attend this session to find out how CloudCatalyst can help--by providing deduplication of backup data to object storage environments in both public and private clouds. You'll learn how you can use CloudCatalyst to achieve petabyte scale with minimal cache storage requirements, transfer data from a NetBackup Dedupe Media Server without going through a rehydration process, and much more. Don't miss this chance to find out exactly how CloudCatalyst provides the most efficient and cost-effective backups from a data center, to the cloud, or in the cloud.
This document provides an overview of Veritas NetBackup 8.1, which offers improved data protection capabilities for multi-cloud environments, modern workloads, and resilient infrastructure. Key features highlighted include leveraging multiple cloud platforms for long-term storage, faster backup to the cloud, protection for scale-out workloads and hyperconverged environments, simplified deployment and maintenance, and enhanced data security across networks. NetBackup 8.1 also features integrated support for popular databases and big data platforms through its use of flexible policies and frameworks.
Tier1 Backup and Disaster Recovery provides a fully managed private cloud solution for backing up and recovering data and virtual machines. It maintains data in geographically dispersed data centers so data can be recovered even if one location fails. The solution offers affordable daily backups and on-demand recovery of files, servers, or full environments within 4 hours. It avoids costs of additional hardware, software, licensing and data center space while providing secure off-site data protection and testing of disaster recovery plans.
This document discusses converged infrastructure solutions from Microsoft and Nutanix for running Microsoft workloads. It notes that CIOs want to deliver uncompromised performance with minimal budget impact while businesses require more agility. The Microsoft Cloud Platform and technologies like Windows Server, Azure, and Microsoft Private Cloud are presented as addressing these needs. Nutanix is described as enabling the Microsoft vision through its web-scale converged infrastructure that provides predictable performance, economics, and growth for workloads like Exchange and SQL Server in private and hybrid cloud environments.
Data analytics, Spark, Hadoop and AI have become fundamental tools to drive digital transformation. A critical challenge is moving from isolated experiments to an organizational or enterprise production infrastructure. In this talk, we break apart the modern data analytics workflow to focus on the data challenges across different phases of the analytics and AI life cycle. By presenting a unified approach to data storage for AI and analytics, organizations can reduce costs, modernize their data strategy and build a sustainable enterprise data lake. By anticipating how Hadoop, Spark, TensorFlow, Caffe and traditional analytics like SAS and HPC can share data, IT departments and data science practitioners can not only co-exist, but speed time to insight. We will present the tangible benefits of a Reference Architecture using real-world installations that span proprietary and open-source frameworks. Using intelligent software-defined shared storage, users are able to eliminate silos, reduce multiple data copies, and improve time to insight. (Pallavi Galgali, Offering Manager, IBM, and Douglas O'Flaherty, Portfolio Product Manager, IBM)
Test Drive: Experience Single-Click Command with the Veritas Access User Inte... (Veritas Technologies LLC)
To deal with relentless data growth over the past few years, most organizations have evolved to incorporate a wide variety of different storage solutions, including SAN, NAS, tape, cloud, file, block, and object. With increasingly complex combinations of these different storage types being used for primary, secondary, and archived data, understanding and managing your overall storage environment can start to feel like an impossible task. In this session, you will see first-hand how Veritas Access, a new software-defined storage solution, makes it possible to finally manage all of your storage from a single console--and allows you to migrate data from one storage tier to another with a single mouse click.
Webinar: Cloud Storage: The 5 Reasons IT Can Do it Better (Storage Switzerland)
In this webinar, learn the five reasons why a private cloud storage system may be more cost effective and deliver a higher quality of service than public cloud storage providers.
In this webinar you will learn:
1. What Public Cloud Storage Architectures Look Like
2. Why Public Providers Chose These Architectures
3. The Problem With Traditional Data Center File Solutions
4. Bringing Cloud Lessons to Traditional IT
5. The Five Reasons IT can Do it Better
Microsoft: Building a Massively Scalable System with DataStax and Microsoft's... (DataStax Academy)
This document discusses how to build massively scalable systems using DataStax and Microsoft's Azure Service Fabric platform. It addresses large-scale challenges like operations management, high density, deployment, updates, scalability, availability and failure recovery. It demonstrates how Service Fabric enables higher microservice density, faster deployments and upgrades, and fast scaling across clusters. Placement constraints and rolling upgrades are explored, along with handling failures and scaling services. Service Fabric provides declarative models, load balancing, and auto-scaling to simplify managing large, distributed systems.
How Lenovo and Nutanix are delivering the invisible infrastructure (Lenovo Data Center)
The document discusses Lenovo's Converged HX Series appliances powered by Nutanix software. It highlights how the appliances simplify infrastructure, reduce costs, and improve reliability. The HX Series includes the HX3500 for compute-heavy workloads, HX5500 for storage-heavy workloads, and HX7500 for high-performance workloads. The Nutanix software delivers capabilities like data services, resilience features, and management tools. Lenovo provides global support, professional services, and warranties for the appliances.
This document discusses leveraging Hadoop within the existing data warehouse environment of the Department of Immigration and Border Protection (DIBP) in Australia. It provides an overview of DIBP's business and why Hadoop was adopted, describes the existing EDW environment, and discusses the technical implementation of Hadoop. It also outlines next steps such as consolidating the departmental EDW and advanced analytics on Hadoop, and concludes by taking questions.
Even though users and application owners are demanding it, the Always-On Data Center seems unrealistic to most IT professionals. Overcoming the cost and complexity of an Always-On environment while delivering consistent results is almost too much to ask. But the reality is that data centers of all sizes can affordably meet this expectation. The Always-On environment requires a holistic approach, counting on a highly virtualized infrastructure, flexible data protection software and purpose built protection storage.
Listen in as experts from Storage Switzerland, Veeam and ExaGrid architect a data availability and protection infrastructure that can meet and even exceed the Always-On expectations of an Always-On organization.
Backing up your virtual environment best practices (Interop)
- Image-based backups provide faster and more efficient protection of virtual environments than traditional agent-based backups; entire virtual machines are captured in binary image files.
- There are two methods for image-based backups: direct-to-target, which has better performance, and proxy-based, which can preserve SAN investments.
- Best practices include weekly or bi-weekly full backups and daily incremental backups, with additional snapshots, replication, and off-site storage for critical systems based on recovery SLAs; large environments need a tiered approach (a capacity-estimate sketch follows below).
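To make the weekly-full-plus-daily-incremental schedule concrete, here is a minimal back-of-the-envelope sketch in Python; the data size, change rate, and retention window are illustrative assumptions, not figures from the presentation.

# Rough storage estimate for a weekly-full + daily-incremental schedule.
# All inputs are illustrative assumptions.
full_size_tb = 10.0        # size of one full backup of the environment
daily_change_rate = 0.05   # ~5% of data changes per day
retention_weeks = 4        # keep four weekly cycles

incremental_size_tb = full_size_tb * daily_change_rate
weekly_cycle_tb = full_size_tb + 6 * incremental_size_tb  # 1 full + 6 incrementals
total_tb = retention_weeks * weekly_cycle_tb

print(f"One incremental: {incremental_size_tb:.1f} TB")
print(f"One weekly cycle: {weekly_cycle_tb:.1f} TB")
print(f"Retained ({retention_weeks} weeks): {total_tb:.1f} TB")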
The document discusses data protection challenges and solutions from Dell. It notes that 23% of respondents desire increased reliability of backups/recoveries and 22% wish for increased speed or frequency of backups. Dell's data protection approach claims to restore applications and data 6x faster than legacy solutions with near-zero downtime. It provides extensible protection across physical, virtual and cloud in one solution. Dell solutions aim to help customers spend less, operate with more agility, and unlock time in their day.
PHD Virtual: Optimizing Backups for Any Storage (Mark McHenry)
Learn about the differences between virtual full and traditional full and incremental backup modes, and which mode works best depending on the type of storage.
The document discusses Veeam's data protection solutions including its Availability Suite which provides availability for always-on enterprises with recovery time objectives (RTOs) of less than 15 minutes. It highlights key features like agentless protection, high-speed recovery, data loss avoidance using the 3-2-1 backup rule, and complete visibility into infrastructure with monitoring and reporting tools. The document also provides information on Veeam's growth, customers, partners and industry recognition.
PHD Virtual Image-based Backup for Citrix XenServer (Mark McHenry)
This presentation shares information about PHD Virtual's Image-based backup for Citrix XenServer environments. This solution is a simple and cost effective alternative to those who are still wrestling with agents and writing scripts to perform backups.
How to achieve better backup with Symantec (Arrow ECS UK)
Symantec provides holistic data protection solutions to address common customer challenges with backup and recovery, including:
1) Disparate backup solutions that add complexity and cost as data grows in volume and organizations virtualize.
2) Struggling to meet backup windows and service level agreements as data increases in size.
3) Looking for ways to reduce cost, complexity, and risk across their backup and recovery environment.
Symantec's portfolio includes NetBackup for large enterprises and Backup Exec for small and medium businesses, both utilizing shared deduplication and virtualization technologies. Symantec also offers appliances and cloud options for simplified backup and disaster recovery.
Webinar: Overcoming the Top 3 Challenges of the Storage Status Quo (Storage Switzerland)
Between 2010 and 2020, IDC predicts that the amount of data created by humans and enterprises will increase 50x. Legacy network attached storage (NAS) systems can't meet the unstructured data demands of the mobile workforce or distributed organizations. In this webinar, George Crump, Lead Analyst at Storage Switzerland, and Brian Wink, Director of Solutions Engineering at Panzura, expose the hidden gotchas of the storage status quo and explore how to manage unstructured data in the cloud.
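For context, a 50x increase over the 2010-2020 decade implies a compound annual growth rate of roughly 48%, as this one-line check shows:

# 50x growth over 10 years -> implied compound annual growth rate
cagr = 50 ** (1 / 10) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~47.9% per year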
[NetApp] Simplified HA/DR Using Storage Solutions (Perforce)
Perforce administrators have several choices for HA/DR solutions depending on RTO/RPO objectives. Using an effective storage solution such as NetApp filers simplifies HA/DR planning in several ways. In this session we'll look at using a NetApp filer for more reliable HA in the event of storage or application failure and simpler DR replication. In the latter case, deduplication and SnapMirror technology can significantly reduce the amount of data replicated to a remote site.
Dalle soluzioni di BackUp & Recovery al Data management a 360° (Jürgen Ambrosi)
Modernizing data protection solutions is now an imperative driven by the rapid emergence of phenomena such as the Digital Transformation (or Revolution), the exponential growth in data volumes seen today and expected in the near future, the adoption of the cloud and of new applications, and the GDPR.
Organizations can no longer rely on inefficient, costly, and often complex backup solutions, and are consequently moving toward new data protection strategies.
We will explore Veritas's natively integrated "360° Data Management" platform, an integrated platform that delivers data protection, high availability, and data visibility. Its first key element is a unified data protection solution with a single console for physical, virtual, and cloud environments, able to act proactively to identify where the data of interest resides and which strategic data must be rapidly protected and securely preserved, limiting the retained volume to only what is needed to sustain business services.
VMworld 2014: Data Protection for vSphere 101 (VMworld)
VDP and vSphere Replication provide different data protection techniques for virtual machines. VDP uses agent-less, disk-based backups for virtual machines with capabilities like application-awareness, granular recovery, and self-service file recovery. It has an RPO of greater than 24 hours and RTO of hours. vSphere Replication provides near-synchronous replication between sites with RPO under 24 hours and RTO of minutes for disaster recovery and testing. The document discusses use cases, features, and best practices for using VDP and vSphere Replication together for backup and replication in vSphere environments.
Backup systems are being asked to do things they were never designed for - and it’s killing them. Join experts from Storage Switzerland and NEC as they discuss the four assumptions that are killing backup storage:
* Assuming backup is an archive
* Assuming it can grow forever
* Assuming it can support production applications
* Assuming deduplication won’t impact recovery
You’ll come away with strategies that could save your backup system from the changes that threaten to overwhelm it.
For complete audio and access to exclusive papers, register for our on-demand webinar:
https://www.brighttalk.com/webcast/5583/126249
How to “Future Proof” Data Protection for Organizational Resilience (Storage Switzerland)
Users' expectations of IT's ability to return mission-critical applications to production are higher than ever. These expectations are leading IT to abandon many of its backup and recovery solutions to try new, unproven solutions that may or may not solve the problem. In either case, the organization has wasted its investment in the first solution for the unknown potential of a new one. User expectations are certain to get higher, so this cycle will likely repeat itself. IT needs a new strategy, one that will meet the current expectations of users and pave the way for true organizational resilience.
These are the *updated* slides (InnoDB clusters and MySQL Enterprise Monitor 3.4 are now GA) from the following webinar, which you can now watch on demand:
https://www.mysql.com/news-and-events/web-seminars/why-mysql-high-availability-matters/
-----------------------------------------------------
MySQL high availability matters because your data matters. If your database goes down, whether due to human error, catastrophic network failure, or planned maintenance, the accessibility and accuracy of your data can be compromised with disastrous results. We'll examine the critical elements of a high availability solution, including:
- Data redundancy
- Data consistency
- Automatic fault detection and resolution
- No single point of failure
And we'll show how you can achieve these things more easily than ever before using MySQL's new native HA solution, InnoDB Cluster (a minimal setup sketch follows below).
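(To give a flavor of that native HA solution, here is a minimal sketch using MySQL Shell's AdminAPI in Python mode (mysqlsh --py); the host names and admin account are placeholder assumptions.)

# Run inside MySQL Shell (mysqlsh --py). Minimal InnoDB Cluster setup sketch;
# host names and the clusteradmin account are placeholder assumptions.
shell.connect("clusteradmin@db1.example.com:3306")  # seed instance

# Create the cluster, then add two more instances so Group Replication
# provides redundancy, consistency, and automatic fault detection.
cluster = dba.create_cluster("prodCluster")
cluster.add_instance("clusteradmin@db2.example.com:3306")
cluster.add_instance("clusteradmin@db3.example.com:3306")

# Check that the cluster can tolerate a failure (no single point of failure).
print(cluster.status())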
Unlocking the Potential of the Cloud for IBM Power Systems (Precisely)
Are you considering leveraging the cloud alongside your existing IBM AIX and IBM i systems infrastructure? There are likely benefits to be realized in scalability, flexibility, and even cost.
However, to realize these benefits, you need to be aware of the challenges and opportunities that come with integrating your IBM Power Systems in the cloud. These challenges range from data synchronization to testing to planning for fallback in the event of problems.
Join us for this webcast to hear about:
• Seamless migration strategies
• Best practices for operating in the cloud
• Benefits of cloud-based HA/DR for IBM AIX and IBM i
Webinar: Achieving VDI Success Without All-Flash Problems (Storage Switzerland)
Join Storage Switzerland and Cloudistics for an informative webinar that presents an alternative approach: one that meets users' performance expectations while leveraging existing, often already paid for, storage hardware and does not introduce new silos of storage.
Addressing VMware Data Backup and Availability Challenges with IBM Spectrum P... (Paula Koziol)
Whether in the enterprise or small-to-medium sized firms, VMware IT administrators and storage management teams face an increasingly complex set of decisions when it comes to deploying, managing, protecting and supporting storage infrastructure. Hear about the emerging issues in the most rapidly changing part of the IT environment, virtual machine (VM) management and availability. Learn about IBM’s perspective on data availability and how it is addressing future challenges you may not even be thinking about today with the new, highly flexible IBM Spectrum Protect Plus VM backup and availability management solution.
Presentation disaster recovery for oracle fusion middleware with the zfs st... (solarisyougood)
The document discusses disaster recovery for Oracle Fusion Middleware using the ZFS Storage Appliance. It outlines the business drivers for disaster recovery including decreasing acceptable downtime. It then provides an overview of using the ZFS Storage Appliance to replicate Oracle Fusion Middleware data to a secondary site for disaster recovery. Key benefits include simplicity, cost savings, and reduced risk. Oracle provides support services to maximize availability of the solution.
Windows Server 2012 R2 at VMUG.org in Leeds (Simon May)
A brief overview of what's coming in Windows Server 2012 R2 that I delivered at VMUG recently, with details on virtualisation improvements, storage improvements, VDI and much more.
Similar to Survey Results: The State of Backup
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The talk covers the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize our carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
20 Comprehensive Checklist of Designing and Developing a Website (Pixlogix Infotech)
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
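In the spirit of that step-by-step approach, here is one small, hedged example of a hardening check: auditing which pods do not enforce runAsNonRoot, written with the official Kubernetes Python client (a working kubeconfig context is assumed).

# Minimal audit sketch: flag pods whose containers do not enforce
# runAsNonRoot. Uses the official `kubernetes` Python client and
# assumes a working kubeconfig context.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        sc = container.security_context
        if sc is None or not sc.run_as_non_root:
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
                  f"container {container.name} may run as root")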
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case share much more than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate for free software and for standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training efforts. She previously worked on LibreOffice migrations and training for various public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (hence her nickname, deneb_alpha).
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
TrustArc Webinar - 2024 Global Privacy Survey (TrustArc)
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share foundational concepts to build on.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
4. What is the average backup environment?
• Are you using file-based backups, application-aware backups, server image backups, a combination, or none?
• Which operating systems are you currently using in your environment?
• What percentage of your environment is virtualized?
• Are you using an on-premise disaster recovery solution, offsite physical solutions (tape, etc.), cloud-based solutions, or Disaster Recovery as a Service (DRaaS)? Do you plan to add solutions in the near future?
The survey drew 400+ responses from IT professionals in a range of industries.
6. Most respondents used multiple backup types.
• 79% used at least two types of backup
• 52% used all three types
(Chart: overlap of file backups, application backups, and server image backups, with segment shares of 9%, 15%, 8%, 3%, 4%, 9%, and 52%.)
7. Most used multiple operating systems
• 63% use more than one OS
• 95% have Windows in their environment
(Chart: OS mix with segment shares of 33%, 3%, 47%, 1%, 1%, 8%, 4%, and 1%; Mac, Other, Mac + Linux, and Linux + Other each account for under 1%.)
8. Virtualization is an ongoing project
• 47% have 71-100% virtualized servers
• 34% have 31-70% virtualized servers
• 19% have 0-30% virtualized servers
9. Physical Offsite DR remains common...
• 97% are using some form of disaster recovery
• 70% use one of four DR solutions: On-Premise, Physical Offsite, On-Premise + Cloud, or On-Premise + Offsite
(Chart: the four solutions' individual shares are 15%, 20%, 10%, and 25%.)
10. ...but cloud adoption is growing
• 31% planned to add a new DR method
• 52% of those adding DR planned to use cloud
13. What we do
(Diagram: ZettaMirror software on the server backs up local media, including NAS, SAN, USB, and HDD, to the ZETTA.NET SERVICE, all under centralized web management.)
• Online backup and disaster recovery, designed for enterprise
• Built-in WAN optimization transfers up to 5TB in 24 hours (see the throughput sketch below)
• SSAE 16-audited, HIPAA/ITAR compliant
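For a sense of what the quoted 5TB-in-24-hours figure requires as sustained throughput, here is a quick arithmetic check (decimal units assumed):

# Sustained throughput needed to move 5 TB in 24 hours (decimal units).
total_bytes = 5 * 10**12
seconds = 24 * 3600
rate = total_bytes / seconds
print(f"{rate / 1e6:.0f} MB/s")        # ~58 MB/s
print(f"{rate * 8 / 1e6:.0f} Mbit/s")  # ~463 Mbit/s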
14. One solution, multiple backup types
(Diagram: physical and virtual servers; files, databases, and server images; backup, archiving, and disaster recovery to cloud + local.)
• Back up file directories, application-specific data or complete server images, with granular recovery options
• Multiple physical and virtual recovery options
• Included support for SQL, Exchange, Hyper-V, NetApp, VMware and unstructured data
• Simultaneous backup to cloud for offsite data protection and local drive for onsite recovery
15. Starting at $175/month
• Cloud + local file, database & server image backup / recovery for physical & virtual servers
• Backup & DR software licenses for an unlimited number of servers & endpoints
• Built-in WAN acceleration for optimal transfer speeds
• 500GB secure cloud storage in SSAE 16-audited datacenters
• 24 x 7 US-based engineer-level support
• Plugins for SQL, Exchange, Hyper-V, NetApp & more
16. Streamlined central management
• Create, configure and manage all backup jobs from a single web-based console
• Unified interface to back up physical and virtual servers
17. Browse files in the cloud
• Choose a snapshot from versions stored in the cloud
• Data is replicated to the cloud in its native format, making it simple to locate and browse files and directories
• Recover a full backup, or select and restore individual files