Velocity provides fast, on-demand access to virtual copies of databases without needing to create and store physical copies. It uses a virtual provisioning file system to create virtual database copies by mapping them to extents in the ingested databases stored on the Velocity storage server. Key features include supporting Oracle and SQL databases, scheduling ingestions, and empowering users to self-provision sandbox copies for tasks like testing and analytics.
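The extent-mapping idea behind these virtual copies can be sketched as copy-on-write over a shared base image: each copy reads shared extents and keeps private data only for extents it writes. This is an illustrative sketch; class and method names are invented and are not the Velocity API.

```python
# Illustrative sketch of extent-mapped virtual copies (copy-on-write).
# Names are invented for illustration; this is not the Velocity API.

class IngestedImage:
    """An ingested database image stored once, split into fixed-size extents."""
    def __init__(self, data: bytes, extent_size: int = 4):
        self.extent_size = extent_size
        self.extents = [data[i:i + extent_size]
                        for i in range(0, len(data), extent_size)]

class VirtualCopy:
    """A 'copy' that maps reads to the shared image and keeps only
    privately written extents, so provisioning is instant and cheap."""
    def __init__(self, image: IngestedImage):
        self.image = image
        self.private = {}           # extent index -> private bytes

    def read(self, idx: int) -> bytes:
        return self.private.get(idx, self.image.extents[idx])

    def write(self, idx: int, data: bytes):
        self.private[idx] = data    # copy-on-write: base image untouched

image = IngestedImage(b"AAAABBBBCCCC")
copy1 = VirtualCopy(image)
copy2 = VirtualCopy(image)
copy1.write(1, b"XXXX")             # only copy1 sees the change
```

Because each copy stores only the extents it modifies, many sandbox copies of a large database can share one set of physical extents.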
The webinar series in collaboration with Veritas Technologies continues.
In this second session we looked at the Veritas Software Defined Storage solutions.
The IT department is today one of the business areas most affected by the exponential growth of data. Consequently, IT managers face rising costs and growing complexity when implementing storage solutions capable of containing the growth in data volume.
At the same time, they must choose solutions able to meet the ever-higher performance levels demanded by new business applications while preserving the functionality of legacy ones.
Deploying high-performance NAS hardware or adopting heterogeneous storage solutions is no longer ideal in terms of cost and management impact. New technologies, developed precisely in response to the need for efficiency and cost containment, make it possible to build infrastructures that maximize the use of the storage solutions already present in the data center and enable the adoption of Object Storage solutions.
To this end, Veritas presents its line of Software Defined Storage solutions.
Technology solutions for disaster recovery and business continuity – Jürgen Ambrosi
Today it is vital for companies to consolidate their competitive advantage in their reference market. The growing amount of business data collected, processed, and archived every day is in fact a precious asset for generating new business opportunities. Managing this important service directly involves IT, which must consequently adopt every measure needed to guarantee operational continuity and meet the RTO and RPO levels set by business objectives and current regulations. Business Continuity and Disaster Recovery solutions address this need precisely, guaranteeing service even in the face of accidental events (failures, natural phenomena, cyber attacks, human error, etc.) that could occur in operation, avoiding the risk of business interruption and/or administrative penalties.
The Veritas Resiliency Platform and Veritas CloudMobility solutions make it possible to build Business Continuity and Disaster Recovery infrastructures with great architectural flexibility. In particular, both – albeit with different strategies – let you exploit the attractive cloud service offerings of the various service providers, while also resolving any possible complexity and risk of contractual lock-in when adopting these technologies.
From Backup & Recovery solutions to 360° Data Management – Jürgen Ambrosi
Modernizing Data Protection solutions is today a priority driven by the rapid emergence of phenomena such as the Digital Transformation (or Revolution), the exponential growth in data volume seen today and expected in the near future, the adoption of the cloud and of new applications, and the GDPR.
Organizations can no longer rely on inefficient, costly, and often complex backup solutions. As a result, they are moving toward new data protection strategies.
We will explore Veritas's natively integrated "360° Data Management" platform, which delivers data protection, high availability, and visibility. Its first fundamental element is a unified Data Protection solution with a single console for physical, virtual, and cloud environments, able to act proactively to identify where the data of interest resides and which strategic data must be rapidly protected and securely preserved, limiting the volume to only what is needed to guarantee business services.
Technology solutions supporting the OpenStack and Container world – Jürgen Ambrosi
Enterprise interest in solutions such as containers and cloud-based platforms like OpenStack is amply confirmed by the positive trend reported by analysts. The benefits of adopting these solutions in IT lie in the possibility of building more agile, scalable, and economical architectures able to meet increasingly stringent business needs and face competitive pressures. Veritas presents its software-defined storage solutions Veritas™ HyperScale for OpenStack and Veritas™ HyperScale for Containers as enabling platforms for introducing these new technologies while also guaranteeing enterprise-class reliability.
Examining Technical Best Practices for Veritas and AWS Using a Detailed Refer...Veritas Technologies LLC
What is the safest, most efficient way to move data to and manage information on the Amazon Web Services (AWS) platform--and how can Veritas solutions help? In this deep dive technical session, Veritas experts will answer these questions by walking you through a detailed reference architecture for AWS. This includes exploring the best model for deploying, integrating, and managing Veritas solutions with AWS services; learning how Veritas can help you move and manage data most effectively on the AWS platform; and seeing a live, fully-functioning reference architecture in action.
Scalar Cisco Hyperflex Presentation, May 13 2016, Part III: Scalar Lunch & Le...patmisasi
Part III: Scalar Lunch & Learn Seminar Series: HyperConverged Systems: Cisco HyperFlex Systems. Cisco HyperFlex HX-Series combines compute, storage, and networking into an easy-to-use system that brings new levels of speed and efficiency to IT.
HyperFlex represents true hyperconvergence, combining innovative software-defined storage and data services software with Cisco UCS, the proven system that unifies servers and networking like no other. Use HyperFlex to unlock the full potential of hyperconverged infrastructure.
VxRail Appliance - Modernize your infrastructure and accelerate IT transforma...Maichino Sepede
An overview of the VxRail Appliance, including what’s new with VxRail on the 14th generation PowerEdge server, and advancements in the VxRail 4.5 software.
Examining Technical Best Practices for Veritas and Azure Using a Detailed Re...Veritas Technologies LLC
Attend this deep dive technical session to learn how Veritas can help you move and manage data more effectively on the Microsoft Azure platform. This includes walking you through a detailed reference architecture to reveal the best model for deploying, integrating, and managing Veritas solutions with Azure services. Don't miss this opportunity to see and explore the benefits of using Veritas in an Azure environment through the lens of a live, fully-functioning reference architecture.
Veritas is expanding its appliance portfolio to meet the challenges of today's rapidly changing data protection and data management environments. Join this session for a detailed look at the many new features and capabilities these next-generation Veritas appliances have to offer. You'll learn how new High Availability (HA) capabilities will reduce the operational costs of planned and unplanned downtime, explore the latest centralized appliance management capabilities, receive a detailed overview of new dedupe to the cloud capabilities, get a detailed look at expanded database protection capabilities, and much more. Don't miss this chance to gain deep technical insights into the many ways these next-generation Veritas appliances will improve your ability to protect your most critical data.
Cloud Bursting: Leveraging the Cloud to Maintain App Performance during Peak ...Veritas Technologies LLC
Even in a multi-cloud world, some mission-critical applications with high performance requirements will continue to run primarily in the data center. However, that doesn't mean these apps can't benefit from public cloud infrastructures, especially during peak times. Join this session to explore how the latest hybrid cloud use cases--including cloud bursting to public infrastructure--can help you maintain performance and meet peak workload demands in a more predictable, cost-effective manner.
Dell Technologies is a unique family of businesses that provides organizations with the infrastructure they need to build their digital future, drive IT Transformation, and protect their most important asset: information.
Specifically for the higher education sector, Dell EMC has designed a catalog of solutions in areas such as:
Converged Infrastructure
Data Storage and Protection
Digital teaching services
In this webinar series we will present the most advanced Dell EMC solutions, currently being evaluated by Fondazione CRUI for a possible framework agreement.
VMworld 2013: Dell Solutions for VMware Virtual SAN VMworld
VMworld 2013
Sheetal Kochavara, VMware
Bryan Martin, Dell Inc.
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
MT44 Dell EMC Data Protection: What You Need to Know About Data Protection Ev...Dell EMC World
Data protection is a critical pillar of any organization's IT transformation, and Dell EMC is #1 in data protection, offering the industry's most comprehensive portfolio of solutions. Our 'Data Protection Everywhere' strategy provides customers with the ultimate in choice and flexibility and eliminates the need to work with multiple vendors' 'point' products. In this session, learn how we enable you to solve your most difficult data protection challenges of today while laying the foundation to address the challenges of tomorrow. Whether your data is local or in the cloud, Dell EMC has you covered. Join this session and learn how to ensure you are protected.
More about Dell EMC World at http://dellemcworld.com/
Nutanix NEXT on Tour - Maarssen, Netherlands NEXTtour
Meet Nutanix, IT industry experts, and your peers for a dynamic afternoon session at De Glazen Ruimte in Maarssen. As an attendee you will see Nutanix Acropolis and Prism, solutions for the next generation of enterprise computing, concluding with a cocktail hour.
Deep Dive: a technical insider's view of NetBackup 8.1 and NetBackup AppliancesVeritas Technologies LLC
Together, NetBackup 8.0 and 8.1 are perhaps the two most significant consecutive releases in NetBackup history. Attend this session to learn how the newly released NetBackup 8.1 builds on version 8.0 to deliver the promise of modern data protection and advanced information management like never before. This session will feature a detailed technical overview of the new security architecture in NetBackup 8.1 that keeps data secure across any network, new dedupe to the cloud capabilities that deliver industry-leading performance, instant recovery for Oracle, added support for virtual and next-gen workloads, faster and easier deployments, and many other new features and capabilities.
You've decided to adopt hyperconverged technologies for your hybrid cloud, but what are the hardware platform choices, and how do you go about deciding which platform best suits your needs?
What can the new NetBackup Appliances offer your organization that Data Domain can't? This session is dedicated to exploring the answers. You'll learn how migrating from Data Domain and other dedupe appliances to the latest appliances from Veritas can reduce backup times and rack space, lower your power and cooling costs, and deliver crucial new 360 Data Management capabilities. After you attend, you'll understand exactly why modern backup appliances need to do more than dedupe--and how intelligent Veritas appliances can deliver the scale, performance, resiliency, availability, and small data center footprint you need.
Windows Server 2012 R2 at VMUG.org in LeedsSimon May
A brief overview of what's coming in Windows Server 2012 R2 that I delivered at VMUG recently, with details on virtualisation improvements, storage improvements, VDI, and much more.
Azure SQL Database Managed Instance is a new flavor of Azure SQL Database that is a game changer. It offers near-complete SQL Server compatibility and network isolation to easily lift and shift databases to Azure (you can literally back up an on-premises database and restore it into an Azure SQL Database Managed Instance). Think of it as an enhancement to Azure SQL Database that is built on the same PaaS infrastructure and maintains all its features (i.e., active geo-replication, high availability, automatic backups, database advisor, threat detection, intelligent insights, vulnerability assessment, etc.) but adds support for databases up to 35TB, VNET, SQL Agent, cross-database querying, replication, etc. So, you can migrate your databases from on-prem to Azure with very little migration effort, which is a big improvement over the current Singleton or Elastic Pool flavors, which can require substantial changes.
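The lift-and-shift path described above relies on a native backup restored from Azure Blob Storage with T-SQL's RESTORE FROM URL. A minimal sketch of building that statement follows; the database name, storage URL, and server name are placeholders, not real resources.

```python
# Sketch of the native backup/restore path into Azure SQL Database
# Managed Instance: back up on-premises to Azure Blob Storage, then
# RESTORE FROM URL on the managed instance. All names are placeholders.

def restore_from_url_sql(db: str, backup_url: str) -> str:
    """Build the T-SQL that restores a native .bak from blob storage."""
    return (f"RESTORE DATABASE [{db}] "
            f"FROM URL = '{backup_url}'")

sql = restore_from_url_sql(
    "SalesDb",
    "https://examplestorage.blob.core.windows.net/backups/SalesDb.bak")

# Issuing it requires a connection to the managed instance, e.g. with
# pyodbc (not run here; server name is illustrative):
#   import pyodbc
#   conn = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};"
#                         "SERVER=example-mi.database.windows.net;...")
#   conn.execute(sql)
```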
The Fastest Way to Redis on Pivotal Cloud FoundryVMware Tanzu
What do developers choose when they need a fast performing datastore with a flexible data model? Hands-down, they choose Redis.
But, waiting for a Redis instance to be set up is not a favorite activity for many developers. This is why on-demand services for Redis have become popular. Developers can start building their applications with Redis right away. There is no fiddling around with installing, configuring, and operating the service.
Redis for Pivotal Cloud Foundry offers dedicated and pre-provisioned service plans for Cloud Foundry developers that work in any cloud. These plans are tailored for typical patterns such as application caching and providing an in-memory datastore. These cover the most common requirements for developers creating net new applications or who are replatforming existing Redis applications.
We'd like to invite you to a webinar discussing different ways to use Redis in cloud-native applications. We'll cover:
- Use cases and requirements for developers
- Alternative ways to access and manage Redis in the cloud
- Features and roadmap of Redis for Pivotal Cloud Foundry
- Quick demo
Presenters: Greg Chase, Director of Products, Pivotal and Craig Olrich, Platform Architect, Pivotal
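An app bound to a Redis service instance on Cloud Foundry typically reads its credentials from the VCAP_SERVICES environment variable. A minimal sketch of that pattern follows; the service label "p-redis" and the JSON shape are illustrative of such bindings, not a guaranteed contract.

```python
# Sketch: reading Redis credentials from Cloud Foundry's VCAP_SERVICES
# environment variable. The service label "p-redis" and the JSON shape
# are illustrative examples of a service binding.
import json
import os

def redis_credentials(vcap_json: str, service: str = "p-redis") -> dict:
    """Return the credentials dict of the first bound Redis instance."""
    services = json.loads(vcap_json)
    return services[service][0]["credentials"]

# Example of what a binding might look like:
os.environ["VCAP_SERVICES"] = json.dumps({
    "p-redis": [{"credentials":
                 {"host": "10.0.0.5", "port": 6379, "password": "s3cret"}}]
})

creds = redis_credentials(os.environ["VCAP_SERVICES"])

# With redis-py one would then connect (not run here):
#   import redis
#   r = redis.Redis(host=creds["host"], port=creds["port"],
#                   password=creds["password"])
```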
Microsoft Azure Bringing Cloud to Your EnterpriseCA Technologies
Take a look at scenarios to get started with Microsoft Azure.
For more information on Management Cloud solutions from CA Technologies, please visit: http://bit.ly/1wEnPhz
Are you actively using or moving to Office 365, G-Suite, or other popular cloud applications? If so, how confident are you that you can keep all of that critical data protected? Attend this session to learn how Veritas can help protect data across all of your different cloud applications--using the same solution you use to protect your existing non-cloud applications. Don't miss this opportunity to explore the advantages of using one unified solution to protect all of your data--across all of your physical, virtual, and cloud environments.
In this talk we review what Docker is and why it's important to developers, admins, and DevOps when they are using a NoSQL database such as Aerospike, the high-performance NoSQL database. Persistence is a critical element for a successful multi-container strategy. We also cover the following topics:
- Using Docker to orchestrate a multi-container application (Flask + Aerospike)
- Injecting HAProxy and other production requirements as we deploy to production
- Scaling the web and Aerospike clusters to grow to meet demand
This presentation, led by Alvin Richards, VP of Product at Aerospike, includes an interactive demo showcasing the core Docker components (Machine, Engine, Swarm, and Compose) along with Aerospike's integration. We hope you will see how much simpler Docker can make building and deploying multi-node Aerospike-based applications.
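The app-side pattern in such a multi-container setup can be sketched as follows: the web container reaches the Aerospike container by its service name rather than localhost. The host name "aerospike" and namespace "test" are illustrative assumptions, not values from the talk.

```python
# Sketch of the app-side pattern in a Flask + Aerospike multi-container
# setup: the web container addresses the Aerospike container by its
# service name. Host "aerospike" and namespace "test" are illustrative.

def aerospike_config(hosts):
    """Client config in the shape expected by aerospike.client(); in a
    container setup the host is the service name, not localhost."""
    return {"hosts": list(hosts)}

cfg = aerospike_config([("aerospike", 3000)])

# With the Aerospike Python client (needs a running server, not run here):
#   import aerospike
#   client = aerospike.client(cfg).connect()
#   key = ("test", "demo", "user1")
#   client.put(key, {"visits": 1})
#   _, _, bins = client.get(key)
```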
VMworld 2013: Maximize Database Performance in Your Software-Defined Data CenterVMworld
VMworld 2013
Mark Achtemichuk, VMware
Michael Webster, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Test Drive: Experience Single-Click Command with the Veritas Access User Inte...Veritas Technologies LLC
To deal with relentless data growth over the past few years, most organizations have evolved to incorporate a wide variety of different storage solutions, including SAN, NAS, tape, cloud, file, block, and object. With increasingly complex combinations of these different storage types being used for primary, secondary, and archived data, understanding and managing your overall storage environment can start to feel like an impossible task. In this session, you will see first-hand how Veritas Access, a new software-defined storage solution, makes it possible to finally manage all of your storage from a single console--and allows you to migrate data from one storage tier to another with a single mouse click.
An introduction to {code} by Dell EMC, our mission on containers, and our core project REX-Ray. This will give the audience an understanding of why REX-Ray is important and where you can go to learn more.
Windows Server 2016 is a cloud-ready operating system that supports your organization's current workloads while introducing new technologies that make the transition to the cloud seamless when the time is right. What are the key innovations and how do they help businesses? You will find the answers to these questions in the presentation.
Cassandra Summit 2014: Internet of Complex Things Analytics with Apache Cassa...DataStax Academy
Speaker: Mohammed Guller, Application Architect & Lead Developer at Glassbeam.
Learn how Cassandra can be used to build a multi-tenant solution for analyzing operational data from Internet of Complex Things (IoCT). IoCT includes complex systems such as computing, storage, networking and medical devices. In this session, we will discuss why Glassbeam migrated from a traditional RDBMS-based architecture to a Cassandra-based architecture. We will discuss the challenges with our first-generation architecture and how Cassandra helped us overcome those challenges. In addition, we will share our next-gen architecture and lessons learned.
Similar to Le soluzioni tecnologiche per il Copy Data Management (20)
The IBM–CRUI collaboration
The IBM Cloud: features and strengths
Cloud First and a solution for every need: IBM IaaS, IBM and VMware, IBM and Skytap, Cloud Object Storage
Application modernization and Cloud Native: IBM PaaS
Cognitive solutions with IBM Watson
IBM: the first provider to qualify its services on the AGID MarketPlace
IBM Garage
A visit to the Cloud data center in Cornaredo
Virtual Labs are a Microsoft solution designed to deploy virtual environments and classrooms quickly and effectively, both for teaching/training and for research/development purposes. This technology lets you create Windows and Linux virtual machines (VMs) while minimizing wasted resources, thanks to quotas and fine-grained policies such as automatic VM start-up and shutdown or a maximum number of VMs per user (professor, researcher, thesis student, or student).
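The quota policies described above can be sketched as simple checks: cap running VMs per role and stop everything at a cutoff hour. The role names and limits below are invented examples, not actual product defaults.

```python
# Illustrative sketch of lab quota policies: a per-role cap on running
# VMs and an automatic shutdown hour. Limits and role names are
# invented examples, not Microsoft's actual defaults.

MAX_VMS = {"professor": 5, "researcher": 3, "student": 1}

def may_start_vm(role: str, running: int) -> bool:
    """Allow a new VM only while the user is under their role's quota."""
    return running < MAX_VMS.get(role, 0)

def vms_to_stop(now_hour: int, shutdown_hour: int, running_vms: list) -> list:
    """At or after the shutdown hour, every running VM is stopped."""
    return list(running_vms) if now_hour >= shutdown_hour else []
```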
Exploring Windows 10: new features and updates. Enhancing the exper... – Jürgen Ambrosi
How to use and manage Microsoft's client operating system in a modern way. Create, study, and work practically anywhere: the stunning, ultra-light Surface offers the best in mobile productivity.
I nuovi strumenti di comunicazione e collaborazione di Office 365 e la loro i...Jürgen Ambrosi
I vantaggi di Office 2019; Gestione e condivisione dei documenti: OneDrive e SharePoint; Lavoro di gruppo con Teams; Strumenti moderni per la formazione (Forms, Sway e Stream). Funzionalità di centralino telefonico e di audio-conferencing integrate in Skype for Business e Teams che abilitano le comunicazioni interne ed esterne all’organizzazione
Power BI Overview e la soluzione SCA per gli AteneiJürgen Ambrosi
Presentazione delle potenzialità di PowerBI e demo di creazione di un Report e Dashboard.
SCA (Università degli Studi di Roma “Tor Vergata”) è la soluzione per le Università in grado di fornire un unico punto di accesso alle informazioni degli studenti relative a performance, carriere e amministrazione, dando facile accesso a risultati di potenti query per prendere rapidamente decisioni
Liberati dal sovraccarico e dalle limitazioni dell’infrastruttura locale. Sfrutta risorse illimitate per ottenere scalabilità per i processi HPC (High Performance Computing), per analizzare dati su vasta scala, eseguire simulazioni e modelli finanziari e sperimentare riducendo il tempo di immissione sul mercato.
Threat management lifecycle in ottica GDPRJürgen Ambrosi
Introduzione agli scenari di autenticazione per i servizi informativi nei contesti lavorativi moderni. Panoramica delle soluzioni offerte dalla soluzione Enterprise Mobility and Security per la messa in sicurezza delle identità e delle informazioni nel loro completo ciclo di vita. Prevenzione, rilevamento, contenimento e risposta a minacce di tipo avanzato con riferimenti alla cyber kill chain (focus su Endpoint, Identità, servizi di produttività e cloud app).
Identity and Data protection with Enterprise Mobility Security in ottica GDPRJürgen Ambrosi
Introduzione agli scenari di autenticazione per i servizi informativi nei contesti lavorativi moderni. Panoramica delle soluzioni offerte dalla soluzione Enterprise Mobility and Security per la messa in sicurezza delle identità e delle informazioni nel loro completo ciclo di vita. Prevenzione, rilevamento, contenimento e risposta a minacce di tipo avanzato con riferimenti alla cyber kill chain (focus su Endpoint, Identità, servizi di produttività e cloud app).
Proposte ORACLE per la gestione dei contenuti digitali e per la ricerca scien...Jürgen Ambrosi
Agenda
gli obiettivi della collaborazione Oracle / CRUI; overview delle soluzioni proposte
l’evoluzione dell’offerta Oracle, on prem e in Cloud
certificazione CSP Agid e modello di pricing su Cloud
le soluzioni per la Comunicazione “Digital” (prodotti, servizi e formazione)
Redazione collaborativa e gestione dei contenuti digitali; integrazione con strumenti di produttività come Office365 e Google
Sviluppo rapido e self-service di micrositi e API per front-end digitali
Assistenti Digitali
le soluzioni per la Ricerca Scientifica e l’Innovazione tecnologica
Il Cloud Oracle per l’HPC
soluzioni on-premise e Cloud per BigData e Data Science / Deep Learning
soluzioni in Cloud per IoT, Blockchain
Survey
Q/A
Proposte ORACLE per la modernizzazione dello sviluppo applicativoJürgen Ambrosi
Argomenti trattati nella sessione:
•gli obiettivi della collaborazione Oracle / CRUI; overview delle soluzioni proposte
l’evoluzione dell’offerta Oracle, on prem e in Cloud
•certificazione CSP Agid e modello di pricing su Cloud
•le soluzioni per la modernizzazione dello Sviluppo Applicativo (prodotti, servizi e formazione)
•Database “Multi-Modello” (relazionale, non relazionale / json, REST): le novità del DB Oracle
•Sviluppo rapido di API e UI “Digital” su Oracle DB: le novità di Apex 18.2
•Sviluppo “poliglotta” su Docker e Kubernetes, in Integrazione e Deployment continui
•Arricchire le applicazioni con funzionalità analitiche evolute, “in-database”
•Tecnologia e framework per gli adempimenti di base del GDPR
•Gestione federata delle Identità (SPID, Social Login)
•Survey
•Q/A
Proposte ORACLE per la modernizzazione del Datacenter e delle infrastrutture ITJürgen Ambrosi
Argomenti trattati nella sessione:
• gli obiettivi della collaborazione Oracle / CRUI; overview delle soluzioni proposte.
• l’evoluzione dell’offerta Oracle, on prem e in Cloud
• certificazione CSP Agid e modello di pricing su Cloud
• le soluzioni per la modernizzazione delle Infrastrutture IT (prodotti, servizi e formazione)
• efficientamento dei Database Oracle
• Appliances per il Database (ODA) e per BigData
• Offloading di workload su Cloud Oracle
• Storage e Backup as-a-Service, Lift/Shift di ambienti di Sviluppo e Test, Decommissioning
• VirtualLabs e MOOC “on-demand” su cloud
• Continuità e DR (su on-prem o su Cloud): soluzioni per basi dati Oracle e non Oracle
L’assistente virtuale che informa gli studenti: l'esperienza del Politecnico ...Jürgen Ambrosi
Il Politecnico di Milano ha implementato una chatbot che consente agli studenti, di interagire con una piattaforma alimentata da intelligenza artificiale. Il sistema sfrutta IBM Watson Conversation, un servizio cognitivo basato su cloud, per migliorare e facilitare l'esperienza. L'assistente virtuale è addestrato per rispondere a domande relative a tre aree specifiche nell'ambito del supporto agli studenti: ammissioni, certificati e tasse. In aggiunta, se le informazioni richieste esulano dalle aree di riferimento, la chatbot rimanda la ricerca delle risposte a pagine specifiche o ai contatti di segreteria.
L'assistente virtuale consente di fornire un servizio continuo agli studenti, senza limiti di orario. Informazioni aggiornate e dettagliate sui quesiti più comuni saranno sempre disponibile e fruibili grazie ad un'interazione guidata. La chatbot è attiva nell'area pubblica del sito e chiunque può porre i quesiti senza la necessità di autenticarsi, ovviamente ciò implica che le informazioni fornite non siano personalizzate.
Dal punto di vista dell'università, la chatbot consente alla segreteria di fornire un servizio di maggior qualità, potendo questa dedicarsi maggiormente al soddisfare le esigenze più specifiche dei singoli studenti.
Webinar Fondazione CRUI e VMware: VMware vRealize SuiteJürgen Ambrosi
vRealize Suite è una piattaforma di Cloud Management di classe enterprise progettata appositamente per il cloud ibrido che consente di distribuire e gestire rapidamente l’infrastruttura e le applicazioni senza compromettere il controllo IT.
Le soluzioni tecnologiche a supporto della normativa GDPRJürgen Ambrosi
Il nuovo ciclo di webinar Fondazione CRUI e Veritas si apre mercoledì 15 novembre alle 10.00 con il tema del GDPR (l’acronimo sta per General Data Protection Regulation: il nuovo regolamento Europeo sulla protezione e il trattamento dei Dati Personali) che sta diventando di estrema attualità in quanto a partire dal 25 maggio 2018 il regolamento entrerà in vigore imponendo alle aziende l’implementazione di metodologie e processi per il controllo e la gestione dei dati personali e/o sensibili con ovvi riflessi anche sui dati presenti sui loro sistemi informatici. Quindi le aziende dovranno necessariamente provvedere a verificare ed eventualmente adeguare anche le loro tecnologie per rendersi “compliant” al nuovo regolamento onde evitare pesanti sanzioni amministrative.
Veritas, da sempre attenta alla governace dei Dati aziendali, ha arricchito il proprio portafoglio di Prodotti offrendo una suite di soluzioni integrate tra loro che permettono di realizzare una vera e propria piattaforma tecnologica per la gestione dei dati che definiamo il Data Management a 360 gradi che indirizza anche la tematica del GDPR. La soluzione è ottenuta facendo interagire in modo complementare tra loro prodotti delle tre linee fondamentali di Veritas ovvero Protezione, Alta Affidabilità e Visibilità del dato.
E’ una piattaforma di Digital Workspace che consente di distribuire e gestire con semplicità e sicurezza qualsiasi applicazione su qualunque dispositivo, integrando funzionalità di controllo dell’accesso, gestione delle applicazioni e gestione degli endpoint multipiattaforma. È disponibile come servizio cloud o per la distribuzione on-site.
E’ un’estensione di VMware vCenter che fornisce ai professionisti IT la possibilità di disaster recovery, migrazione di siti e funzionalità di test non distruttive.
In questa sessione pratica, della durata di almeno 2 ore, useremo l’Hands On Lab “HOL-1803-01-NET – VMware NSX – Getting Started” per analizzare i seguenti punti:
– Logical Switching Come creare nuove reti senza dover modificare l’infrastruttura network fisica sottostante
– Logical Routing Come ottimizzare il traffico di rete utilizzando logiche di routing distribuito
– Edge Service Gateway Presentazione delle funzionalità della componente
– Microsegmentation Come creare regole sul distribuited firewall- Microsegmentation
– Parte 2 Mettere in sicurezza gli ambienti sfruttando il concetto di security group
Biological screening of herbal drugs: Introduction and Need for
Phyto-Pharmacological Screening, New Strategies for evaluating
Natural Products, In vitro evaluation techniques for Antioxidants, Antimicrobial and Anticancer drugs. In vivo evaluation techniques
for Anti-inflammatory, Antiulcer, Anticancer, Wound healing, Antidiabetic, Hepatoprotective, Cardio protective, Diuretics and
Antifertility, Toxicity studies as per OECD guidelines
Acetabularia Information For Class 9 .docxvaibhavrinwa19
Acetabularia acetabulum is a single-celled green alga that in its vegetative state is morphologically differentiated into a basal rhizoid and an axially elongated stalk, which bears whorls of branching hairs. The single diploid nucleus resides in the rhizoid.
Francesca Gottschalk - How can education support child empowerment.pptxEduSkills OECD
Francesca Gottschalk from the OECD’s Centre for Educational Research and Innovation presents at the Ask an Expert Webinar: How can education support child empowerment?
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Embracing GenAI - A Strategic ImperativePeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
2. Agenda
1. Presentation Purpose and Desired Outcomes
2. Customer Use Cases
3. Technical Architecture Overview
4. Features and Functionality
5. Veritas Product Integration
6. Summary and Additional Resources
3. Intended use of this presentation and its contents
Presentation Purpose and Desired Outcomes
4. Presentation Purpose and Desired Outcomes
Desired outcomes:
• Illustrate the critical business challenges Velocity addresses for our customers
• Describe Velocity features and technology
• Explain Velocity architecture
• Articulate Velocity Appliance strengths
Use the contents of this slide deck to create a custom presentation that addresses the specific business challenges and associated pain points faced by your customer.
6. Velocity Customer Use Cases
Streamline data access
Provide fast, self-service access to the right data at the right time.
• Accelerate application releases, upgrades, and patches
• Empower end users with self-service data access
• Eliminate the waiting, delays, and resource dependencies of physical copies by accessing virtual copies on demand
7. Velocity Customer Use Cases
Improve storage optimization and cut costs
Dramatically reduce copy data sprawl.
• Reduce your copy data footprint by up to 90%
• Use less storage by provisioning virtual copies on demand
• Lower storage costs
8. Velocity Customer Use Cases
Decrease information risk
Set access permissions by role and improve protection.
• Easily set access permissions by role
• Centrally control, manage, and track virtual copies via a single interface
• Protect vital information with dynamic data masking
9. Velocity Customer Use Cases
Boost backup and recovery performance
Reduce backup windows by eliminating redundant data.
• Shorten backup windows by reducing the volume of copy data
• Ensure highly efficient storage utilization
• Lower storage costs
10. Velocity Introduction
Velocity provides rapid, on-demand, self-service access to data without the burden of creating, storing, and maintaining physical database copies or resource dependencies.
Velocity is part of the Veritas 360 Data Management portfolio.
11. Velocity Core Capabilities
Copy Data Virtualization – Velocity automates the creation, consumption, and deletion of copy data, empowering authorized end users to instantly access self-service virtual data copies.
Faster Copy Data Access – Velocity helps organizations consolidate storage, reduce cost, and eliminate copy data management complexity.
12. Data Refresh Example
Without Veritas Velocity
TIME NEEDED: Could take weeks…
• Data provisioning and preparation processes are manual and inefficient
• Work requires multiple teams to be involved and impacted
• May need to be repeated multiple times to realize the required result
Workflow (step – actor): Ticket Sent (end user) → Ticket Received (workflow system) → IT Engaged (IT admin) → Data Located (database admin) → Data Copied (storage admin) → Data Prepared (database admin) → Data Ready (IT admin) → Data Access (end user)
13. Data Refresh Example
With Veritas Velocity
TIME NEEDED: Takes minutes…
• Virtual data copy creation is fast and storage/network efficient
• Process can be fully automated
• Self-service experience for end users
Workflow (step – actor): Data Needed (end user) → Velocity Used (Cloud Console) → Data Prepared (database admin) → User Access (application host)
14. Technical Architecture Overview
Component roles and relationships in a Velocity architecture
15. Technical Architecture Overview
Architectural overview diagram
[Diagram] An Admin User sets up the solution (user roles and privileges) through the Velocity Console, which is hosted in the Veritas Cloud or available as an on-premises solution. A Database Server (ingestion source; not a Velocity component) registers with the Console over a control connection and ingests databases to the Storage Server, a virtual machine. Sandbox Users self-provision sandboxes and access them from an Application Host (sandbox server; not a Velocity component) over a database mount, for data mining, analysis, testing, and manipulation. The database ingest and mount are data movement events; registration and control connections are not.
16. Technical Architecture Overview
Velocity Storage Server
Role: Storage Server
• Veritas Provisioning File System (VPFS)
• Repository for ingested databases and virtual database copies
• Data management and optimization solution
• Virtual database copy generator
Infrastructure: virtual machine
17. Technical Architecture Overview
Velocity Console – Veritas Cloud
Role: Velocity Cloud Console
• SaaS portal in the Veritas Cloud
• Multi-tenant platform
• Designed for user access and Storage Server management
• Includes user identity and authentication services
Infrastructure: Veritas Cloud
18. Technical Architecture Overview
Velocity Console – NEW! On-premises Management Server
Role: On-premises Management Server
• OVA provided by Veritas
• Comprises six Docker images
• Based on RHEL 7
• Private cloud solution
• Does not require internet access
Infrastructure: vSphere OVA
19. Technical Architecture Overview
Velocity Console – NEW! On-premises Management Server
Docker services (preinstalled images):
• Couchbase – Couchbase database datastore
• Elasticsearch – Clustering and indexing for events, the mdr bucket, and log data
• RabbitMQ – Message routing and queuing
• EPMP Services – Provisioning, identity, event, messaging, and core services
• Velocity Tomcat – Front end for the Private Cloud, hosting the Cloud Console
• EPMP Tomcat – Front end for Java, Elasticsearch, RabbitMQ, and Couchbase
Infrastructure: vSphere OVA
20. Technical Architecture Overview
Velocity Client
Role: Velocity Client (installed on host database servers)
• Deployed to database servers
• Secures communications between database servers and the Velocity Storage Server
• Database servers must be registered with the Cloud Console
21. Technical Architecture Overview
Database server (non-Velocity component)
Role: Database Server (ingestion source)
• Hosts production databases, such as Oracle or SQL
• Server from which Velocity performs ingestion operations
• Requires the Velocity Client
• Not provided by Velocity
22. Technical Architecture Overview
Application host (non-Velocity component)
Role: Application Host (sandbox server)
• Used to access sandboxes on the Velocity Storage Server
• Requires appropriate database software to interact with mounted sandboxes
• Requires the Velocity Client
• Not provided by Velocity
23. Technical Architecture Overview
Preparation host (non-Velocity component)
Role: Preparation Host (Oracle) – data preparation/masking
• Allows an administrator to mask or remove sensitive data from a 'raw' sandbox
• Velocity exposes a 'raw' sandbox to a preparation server
• Requires the Velocity Client
• Not provided by Velocity
24. Technical Architecture Overview
Process flow and benefits
Process flow: Velocity captures copies of production databases and stores them on the Velocity Storage Server through a process called ingestion. After ingestion, users can create virtual copies of a database for various business purposes through a process called provisioning.
Key benefits:
• Self-service – Users create their own virtual database copies
• Performance – Provisioning is fast and does not impact production
• Security – Administrators control user access permissions
• Flexibility – The storage server can be an appliance or a virtual machine
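The two-phase flow above (ingest a database once, then provision many virtual copies from it) can be illustrated with a toy model. The class and method names below are invented for illustration and are not the Velocity API.

```python
# Toy model of the Velocity process flow: ingest a production database once,
# then provision any number of lightweight virtual copies (sandboxes) from it.
# All names here are illustrative -- this is not the Velocity API.

class StorageServer:
    def __init__(self):
        self.ingested = {}   # database name -> ingested copy (one physical copy)
        self.sandboxes = {}  # sandbox name -> source database name

    def ingest(self, db_name, data):
        """Capture a copy of a production database (the 'ingestion' step)."""
        self.ingested[db_name] = data

    def provision(self, sandbox_name, db_name):
        """Create a virtual copy for self-service use (the 'provisioning' step)."""
        if db_name not in self.ingested:
            raise ValueError(f"{db_name} has not been ingested yet")
        self.sandboxes[sandbox_name] = db_name  # no physical copy is made

server = StorageServer()
server.ingest("sales_db", data=b"...blocks...")
server.provision("qa_sandbox", "sales_db")
server.provision("analytics_sandbox", "sales_db")
print(len(server.ingested), "physical copy;", len(server.sandboxes), "virtual copies")
```

The key property mirrored here is that provisioning only records a mapping back to the single ingested copy, which is why many sandboxes add almost no storage.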
25. Technical Architecture Overview
Velocity roles
Velocity Admin
• Oversees all users and data; can assign additional users to the administrator role
• Can access all dialogs in the Velocity Cloud Console and can view all sandboxes, database sources, and database copies
Sandbox User
• Has access to specific databases and database copies
• Can create and delete sandboxes based on the databases that are made available to them
Database Admin
• Creates database sources by adding databases, allowing databases to be ingested
• Grants Sandbox Users, or groups of users, access to database sources, allowing them to create sandboxes
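The three roles above can be modeled as a simple permission table. The permission names below are made up for illustration; only the role semantics come from the slide.

```python
# Minimal sketch of the three Velocity roles as a permission table.
# The permission strings are invented for illustration; the role
# semantics follow the role descriptions above.

PERMISSIONS = {
    "velocity_admin": {"manage_users", "view_all_sandboxes", "create_sandbox",
                       "create_database_source", "grant_access"},
    "database_admin": {"create_database_source", "grant_access"},
    "sandbox_user":   {"create_sandbox", "delete_sandbox"},
}

def can(role, action):
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())

print(can("sandbox_user", "create_sandbox"))   # a Sandbox User can self-provision
print(can("sandbox_user", "grant_access"))     # but cannot grant access to sources
```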
27. Features and Functionality
Supported database workloads
• Database users ingest databases from a source server using the Velocity Console
• Databases are stored and deduplicated on the Velocity Storage Server
• Sandbox users provision virtual database copies using the Velocity Console
• Sandbox users access virtual database copies using an application host server
Note: Refer to the Velocity Help Center for the latest information on supported database applications and versions.
28. Features and Functionality
Ingesting Oracle data
[Diagram] The architectural overview, specialized for Oracle: the Oracle Server (ingestion source) registers with the Velocity Console over a control connection and ingests databases to the Storage Server via RMAN, while the Application Host accesses virtual copies over an NFS mount. The RMAN ingest and NFS mount are the data movement events.
29. Features and Functionality
Ingesting Oracle data
[Diagram] Oracle RMAN controls creation of the logical database copy. The ingest flows from the Oracle Server (ingestion source) over RMAN/NFS to the Storage Server (virtual machine) as a data movement event.
30. Features and Functionality
Ingesting Oracle data
An Oracle database can also be ingested through integration with NetBackup, allowing NetBackup and Velocity to catalog and use the data for backup and data provisioning, respectively.
[Diagram] NetBackup policies control ingestion of the Oracle database: the NetBackup Master Server manages the job, and the ingest flows from the Oracle Server over RMAN/NFS to the Storage Server as a data movement event.
31. Features and Functionality
Ingesting Oracle data
Oracle ingestion notes:
• Velocity ingestion – Initiated by the Velocity Client on the Oracle server after receiving commands from the Velocity Storage Server
• NetBackup ingestion – Initiated by the NetBackup Client on the Oracle server after receiving policy commands from the NetBackup Master Server; both the RMAN and NetBackup catalogs are notified of the ingestion event and updated
• With either process, ingestion is performed by Oracle RMAN
• The first ingestion is a full copy of the database; subsequent ingestions are incremental
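The full-then-incremental pattern above can be sketched with a helper that emits the corresponding RMAN commands. This mirrors standard RMAN incremental backup syntax (level 0 full, level 1 incremental); it is not the script Velocity actually runs, which the deck does not show.

```python
# Illustrative helper emitting RMAN commands for the ingestion pattern
# described above: a level-0 (full) backup for the first ingestion,
# level-1 (incremental) backups afterwards. Standard RMAN syntax is used;
# Velocity's internal script is not shown in the deck.

def rman_ingest_commands(first_ingestion: bool) -> str:
    level = 0 if first_ingestion else 1
    return (
        "RUN {\n"
        f"  BACKUP INCREMENTAL LEVEL {level} DATABASE;\n"
        "}\n"
    )

print(rman_ingest_commands(first_ingestion=True))   # full copy (LEVEL 0)
print(rman_ingest_commands(first_ingestion=False))  # incremental (LEVEL 1)
```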
32. Features and Functionality
Ingesting Oracle data
[Diagram] The Oracle ingestion flow extended with data preparation: after the RMAN ingest to the Storage Server, a snapshot of the ingested copy is taken (./sandbox snapshot) and DBA-provided masking scripts are run in a preparation step before sandboxes are provisioned.
NEW! Automated Script Execution – The "Copy Preparation Script Path" field in the Velocity Console automates the execution of DBA-provided data preparation (masking) scripts.
33. Features and Functionality
Ingesting Oracle data
NEW! Support for Oracle 12c CDB/PDB databases
• Velocity 2.8 supports ingesting Oracle container databases (CDBs) and pluggable databases (PDBs)
• Databases are ingested at the CDB level, including all pluggable databases within
• The process for ingesting Oracle databases remains the same
• The process for provisioning virtual CDB/PDB database copies remains the same
• For more information on Oracle CDBs and PDBs: https://docs.oracle.com/database/121/ADMQS/GUID-0FEBEF5F-DF3E-4101-B18B-84921E2F6AA2.htm#ADMQS12498
34. Features and Functionality
Ingesting Oracle data
Ingestion scheduling:
• Ingestion of Oracle databases can be scheduled with Velocity 2.7 and later
• Schedules are created in the Velocity Console but driven by the Storage Server (vsched service)
• Schedules are tied to Database Sources
• Day/week/month granularity is supported
• One schedule per Database Source
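The scheduling constraints above (day/week/month granularity, at most one schedule per Database Source) can be captured in a small model. The field names are invented for illustration and do not reflect Velocity's internal data model.

```python
# Small sketch of the ingestion-scheduling constraints described above:
# day/week/month granularity, and at most one schedule per Database Source.
# Names are invented for illustration, not taken from Velocity.

VALID_FREQUENCIES = {"days", "weeks", "months"}

class Scheduler:
    def __init__(self):
        self.schedules = {}  # database source -> frequency

    def set_schedule(self, source, frequency):
        if frequency not in VALID_FREQUENCIES:
            raise ValueError(f"unsupported frequency: {frequency}")
        # Setting a schedule replaces any existing one: one per source.
        self.schedules[source] = frequency

sched = Scheduler()
sched.set_schedule("hr_db", "days")
sched.set_schedule("hr_db", "weeks")  # replaces the daily schedule
print(sched.schedules)
```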
35. Features and Functionality
Ingesting SQL data
[Diagram] The architectural overview, specialized for SQL Server: the SQL Server (ingestion source) registers with the Velocity Console over a control connection and ingests databases to the Storage Server via a VSS Copy, while the Application Host accesses virtual copies over a CIFS mount. The VSS Copy ingest and CIFS mount are the data movement events.
36. Features and Functionality
Ingesting SQL data
[Diagram] A VSS Copy snapshot is used to create the logical database copy. The ingest flows from the SQL Server (ingestion source) over VSS/CIFS to the Storage Server (virtual machine) as a data movement event.
37. Features and Functionality
Ingesting SQL data
Velocity Client on SQL Servers:
• For Windows SQL servers, the Velocity Client is deployed as an MSI package
• The client must be installed manually or with a third-party software management system; silent install is supported
• Installing the Velocity Client creates a service with Local System account privileges; this service facilitates execution of commands received from the Velocity Storage Server
• Commands are retrieved by the Velocity Client using HTTPS pull requests, with connections initiated toward the Velocity Storage Server (no need to open additional inbound ports on the SQL server)
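The pull-based command channel described above can be sketched as a payload-dispatch routine: the client polls the storage server over HTTPS and executes whatever command it receives. The payload shape, command names, and handler logic below are entirely hypothetical; the deck only states that commands are pulled over HTTPS.

```python
# Sketch of a pull-based command channel like the one described above:
# the client periodically pulls a command payload from the storage server
# and dispatches it locally. The JSON payload shape and command names are
# hypothetical -- the deck does not document the actual protocol.

import json

HANDLERS = {
    "ingest": lambda args: f"ingesting {args['database']}",
    "status": lambda args: "ok",
}

def handle_payload(raw: bytes) -> str:
    """Decode one pulled command payload and dispatch it to a handler."""
    msg = json.loads(raw)
    handler = HANDLERS.get(msg["command"])
    if handler is None:
        return f"unknown command: {msg['command']}"
    return handler(msg.get("args", {}))

print(handle_payload(b'{"command": "ingest", "args": {"database": "sales_db"}}'))
```

Because the client initiates every request, this design needs no inbound ports opened on the database server, which matches the bullet above.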
38. Features and Functionality
Ingesting SQL data
SQL ingestion notes:
• Ingestion is initiated by the Velocity Client on the SQL server after receiving commands from the Velocity Storage Server
• The process uses the SQL VSS Writer to capture a snapshot of the SQL database
• The snapshot is captured using the VSS Copy method; SQL logs are not truncated
• Ingestion moves a copy of the SQL database to a CIFS share on the Velocity Storage Server
• Ingestion events are full database copies and are supported for standalone SQL configurations only
39. Features and Functionality
Ingesting SQL data
• Ingestion can be initiated immediately or scheduled for a later time
• Each SQL ingestion is a data movement event that captures a full copy of the source database
• Ingestion of SQL databases uses the VSS Copy method; logs are not truncated during ingestion
40. Features and Functionality
Ingesting SQL data
NEW! SQL ingestion scheduling:
• Ingestion of SQL databases can be scheduled with Velocity 2.8 and later
• Schedules are created in the Velocity Console but driven by the Storage Server (vsched service)
• Schedules are tied to Database Sources
• Day/week/month granularity is supported
• One schedule per Database Source
41. Features and Functionality
Provisioning virtual database copies
When a user requests a virtual database copy, the Velocity Storage Server creates the copy within the VPFS file system.
[Diagram] Within VPFS on the Storage Server, the ingested database and the virtual data copy each have an extent map (the original copy extent map and the virtual copy extent map) referencing a shared pool of extents (Extent 1, Extent 2, Extent 3, … Extent n).
42. Features and Functionality
Provisioning virtual database copies
Virtual database copies – overview: Virtual database copies exist within the Storage Server's VPFS as data blocks and extents, plus an extent map that represents the current virtual database copy. Extents can be shared across multiple virtual copies.
• A copy consists of a set of data extents and an extent map
• The extent map references the data extents that represent the current copy
• As changes are made to a virtual database copy, new extents are created and the map is updated
43. Features and Functionality
Provisioning virtual database copies
Virtual database copies – changes:
• As changes occur over time, data may be overwritten or deleted
• Extents for overwritten or deleted data are removed from the map
• Deleted extents may still be referenced by a saved database copy or by other virtual copies
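The extent-map behavior described above is a copy-on-write scheme, which can be sketched concretely: a virtual copy starts by sharing every extent with the ingested database, and writing a block allocates a new extent and updates only that copy's map. This is a conceptual sketch, not the VPFS implementation.

```python
# Conceptual sketch of copy-on-write extent maps as described above:
# a virtual copy initially shares all extents with the ingested database;
# a write allocates a new extent and updates only the copy's own map.
# Not the actual VPFS implementation.

class ExtentStore:
    def __init__(self):
        self.extents = {}  # extent id -> data block
        self.next_id = 0

    def put(self, data):
        """Allocate a new extent holding the given data block."""
        self.extents[self.next_id] = data
        self.next_id += 1
        return self.next_id - 1

store = ExtentStore()
# Ingested database: its extent map references extents for blocks A, B, C.
original_map = [store.put(b"A"), store.put(b"B"), store.put(b"C")]
# Provisioning a virtual copy duplicates only the map, not the data.
virtual_map = list(original_map)
# Overwriting block 1 in the virtual copy allocates one new extent;
# the original map (and any other copy) still references the old extent.
virtual_map[1] = store.put(b"B'")

print([store.extents[e] for e in original_map])  # unchanged
print([store.extents[e] for e in virtual_map])   # sees the new block
print(len(store.extents), "extents total")       # only one extent was added
```

This is why the previous bullet matters: the old extent for block B cannot be freed while the original map (or any other virtual copy) still references it.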
44. Features and Functionality
Provisioning virtual database copies
Virtual database copies – access:
• The virtual copy is stored within the Velocity Storage Server's VPFS
• The virtual database copy is exposed transparently via the VPFS interface
• The VPFS interface is currently exported from the Velocity Storage Server via NFS (Oracle) or CIFS (SQL)
45. Features and Functionality
Provisioning virtual database copies
Virtual database copies – removal/deletion:
• Users can delete old or obsolete sandboxes
• Sandboxes are tracked and accessed in the 'Sandboxes' section of the Velocity Cloud Console, and are deleted from there as well
46. Features and Functionality
Recovering databases
Oracle Database Recovery
• Velocity supports recovery of Oracle databases
• No additional configuration or licenses required for Oracle recovery
• Oracle recovery is accomplished through Recovery Sandboxes
• Recovery Sandboxes can only be created by DBA and Admin users
• Creating a Recovery Sandbox executes an RMAN script which moves data
to target Oracle host
• Current valid control file must be available on the source Oracle host
• Post-recovery operations required by Oracle DBA to finish recovery
47. Features and Functionality
Recovering databases
[Diagram: Oracle recovery workflow. An Admin or DBA creates a Recovery Sandbox from the Cloud Console (based in the Veritas Cloud, or available as an on-premise solution). The Velocity Storage Server, a virtual machine, moves data over the network to the recovery target, an Oracle server that is not a Velocity component. Recovery is currently supported for Oracle databases, and the Recovery Sandbox option is not available to Sandbox Users.]
Oracle Recovery Notes
• No additional configuration or licenses required to support Oracle recovery capability
• Creation of a Recovery Sandbox executes an RMAN script to transfer data files to the target Oracle host
Important: After Recovery Sandbox creation, the Oracle DBA must run recovery commands to finish the Oracle restore operation.
48. Features and Functionality
Requirements and prerequisites
Velocity Requirements and Prerequisites
• Information on Velocity requirements and prerequisites is provided by the Velocity
Help Center on-line resource
• The Velocity Help Center is the authoritative source of information on Velocity
requirements, and supported workloads
• The Velocity Help Center includes the Veritas Velocity Users Guide and Velocity
Release Notes
• Access to the on-line Velocity Help Center can be found here:
http://veritashelp.com/Welcome?locale=EN_US&context=VV2015
49. How Velocity integrates with other solutions in the
Veritas 360 Data Management portfolio
Veritas Product Integration
50. Veritas Product Integration
Velocity and NetBackup
Unified storage infrastructure for Data Protection and Copy Data Management
Full benefits from proven Veritas data deduplication technology
Benefits
Velocity
1. Notified of NetBackup-driven database ingestion
2. Indexes ingested databases for virtual copy provisioning
3. After ingestion, can provision virtual copies
NetBackup
1. Performs database ingestion
2. Uses Oracle Intelligent Policies & Storage Lifecycle Policies
3. After ingestion, can create backup copies with SLPs
51. Review of presentation desired outcomes and links to
additional resources
Summary and Additional Resources
52. Summary and Additional Resources
DESIRED OUTCOMES
Illustrate the critical business challenges Velocity addresses for our customers
Describe Velocity features and technology
Explain Velocity architecture
Articulate Velocity Appliance strengths
Use the contents of this slide deck to create a custom presentation that addresses the specific business challenges and associated pain points faced by your customer.
53. Summary and Additional Resources
Link Description
www.veritas.com Veritas website
www.veritas.com/product/data-virtualization Veritas data virtualization solutions
www.facebook.com/veritasvelocity Velocity Facebook page
https://twitter.com/veritasvelocity Velocity Twitter home page
http://veritashelp.com/Welcome?context=vv2015 Velocity Help Center
Manual, labor-intensive processes to provision copies of production data can take days or weeks and in some cases, may be impossible. Velocity stores a single copy of production data ingested directly from NetBackup or the source, letting you easily provision virtual copies on-demand and speed up data access. Because provisioning virtual copies takes only seconds, you can create copies whenever you need them and get to market faster.
One of the biggest causes of rising data volumes is copy data. According to IDC, organizations create on average 20 copies of production data. By provisioning virtual copies instead of physical, you can eliminate the need for additional infrastructure or separate processes to create, store, and manage physical copies.
Veritas Velocity lets you easily set role-based access permissions and user privileges from a central interface, ensuring only authorized users can access virtual copies of production data. To further protect data against exposure or theft, Velocity enables you to run your own scripts or other data preparation processes to mask sensitive data. With these controls in place, you can accelerate data-driven processes and strengthen information governance while protecting data against exposure, breach, and theft.
Today’s organizations are struggling to keep pace with phenomenal data growth. As data volumes increase, so do backup windows, storage consumption, and data management costs. By ingesting and storing a single copy of production data, Veritas Velocity instantly provisions virtual copies and makes them available to end users via a self-service portal, eliminating the need for physical copies that increase data volume, storage, and costs.
Introduction to Veritas Velocity
Velocity provides rapid, on-demand, self-service access to data without the burden of creating, storing, and maintaining physical database copies or resource dependencies.
Velocity is part of the Veritas 360 Data Management portfolio.
Faster Copy Data Access
Velocity helps organizations consolidate storage, reduce cost, and eliminate copy data management complexity.
Copy Data Virtualization
Velocity automates the creation, consumption, and deletion of copy data thereby empowering authorized end users to instantly access self-service virtual data copies.
Storage Server
The Velocity Storage Server is the heart of the Velocity solution. Ingested databases are stored within the integrated VPFS file system of the Storage Server, which optimizes data storage and generates or expires virtual database copies for authorized Sandbox users.
Velocity Console
The Velocity Console is used by administrators to ingest copies of production databases into the Velocity Storage Server, deploy the Velocity Client, and manage users and the Velocity Storage Server. The Velocity Cloud Console is used by end users to create virtual database copies (sandboxes). The Velocity Cloud Console is based in the Veritas Cloud and can optionally be deployed as an on premise virtual machine.
The Cloud Console is multi-tenant, meaning each customer can create and manage multiple independent tenants. Tenant metadata is not shared with, or accessible by, other tenants of the Cloud Console.
Security Details
Based in the Veritas Cloud platform
User identity & authentication service
NIST 800-53 and 800-56 compliance
PCI/DSS Level 1 certification
SSAE 16 and SOC II compliance
The Private Cloud solution will be delivered as an Open Virtual Appliance (OVA) based on RHEL 7. The OVA has Docker services installed and acts as the Docker host.
Our Private Cloud offering will be composed of seven (7) services governing six (6) Docker containers, based on six (6) Docker images pre-installed on the OVA. Each container provides a component that makes up the Velocity Private Cloud. Data and configuration files are shared between the Docker host and the individual containers; this includes configuration files, logs, and data.
Velocity Client
The Velocity Client is a service that facilitates automated database ingestion and sandbox preparation. To configure the Velocity Client, you must copy an installation package from the Velocity Storage Server. Then, you must install it on any host database servers that are intended to be used for either sandboxes or database ingestion. Finally, each host server must be registered with Velocity before you can ingest databases or create sandboxes.
Database Server (Non-Velocity Component)
A database server is a production server that hosts important database applications such as Oracle and SQL. Velocity interacts with database servers to ingest copies of production databases into the Velocity Storage Server using the Velocity Client.
Application Host (Non-Velocity Component)
An application host is a server that a Sandbox user can leverage to access Virtual Database Copies – also referred to as Sandboxes – mounted from the Velocity Storage Server. The Sandbox user accesses Virtual Database Copies for a number of business purposes, such as data analysis and data mining.
Preparation Host (Oracle) (Non-Velocity Component)
A preparation host is a server used by a Database Administrator to access a ‘raw’ sandbox for the purposes of data preparation or masking. The Database Administrator runs scripts or other processes to mask or remove sensitive information from a ‘raw’ sandbox in order for the Sandbox to be safely accessed by a Sandbox user without exposing sensitive information.
Process Flow
Velocity captures copies of production databases and moves them to the Velocity Storage Server through a process called ingestion.
After ingestion, users can quickly and easily create virtual copies of a database for various business purposes through a process called provisioning.
Benefits
Users create their own virtual database copies – self service
Data provisioning is instantaneous and does not impact production resources
Administrator controls user access permissions to ingested databases
Storage server deployment flexibility – storage server can be deployed as an enterprise-grade physical appliance or as a virtual machine
Velocity Administrator
When an organization is created in the Veritas Application Portal, the initial user is created as a Velocity administrator. Only Velocity administrators can assign additional users to the Velocity administrator role.
Each organization has at least one Velocity administrator who oversees all users and data in the system. This includes creating, listing, editing, and deleting users, and managing Velocity Storage Servers. The Velocity administrator can access all dialogs in the Velocity Cloud Console and can view all sandboxes, database sources, and database copies.
Velocity requires at least one Velocity administrator. You cannot remove all of your organization's users from this role. Only Velocity administrators can set up the Velocity Storage Server or instances of the Velocity Client.
Database Administrator
The Velocity administrator assigns users to this role.
The database administrator creates database sources in Velocity by adding databases. When a database is added, the database administrator becomes the owner of the database. The database can now be ingested.
After the database is ingested, the database administrator can use a role to grant users access to the database source. Users with the appropriate permissions can create sandboxes from the database source. The database administrator can use multiple roles to define the groups of sandbox users that are authorized to access specific databases.
Sandbox User
The Velocity administrator or the database administrator assigns users to this role.
A sandbox user has access to specific databases and database copies. Sandbox users can create sandboxes based on the databases that are available to them.
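The three roles above form a simple permission hierarchy. A minimal sketch of that model follows; the role names come from the text, but the specific permission strings are illustrative assumptions, not Velocity's actual access-control implementation.

```python
# Illustrative sketch of Velocity's role model. The permission names are
# assumptions derived from the role descriptions; they are not Velocity's API.
PERMISSIONS = {
    "velocity_admin": {"manage_users", "manage_storage_servers",
                       "ingest_database", "create_sandbox",
                       "create_recovery_sandbox"},
    "database_admin": {"ingest_database", "grant_source_access",
                       "create_sandbox", "create_recovery_sandbox"},
    "sandbox_user":   {"create_sandbox"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())
```

For example, under this sketch a sandbox user may create sandboxes but may not ingest databases or manage other users, matching the role descriptions above.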
Velocity Supported Workloads
Database users ingest databases from a source server using the Velocity console
Ingested databases are stored on the Velocity Storage Server
Sandbox users provision virtual database copies using the Velocity console
Sandbox users access sandboxes using an application host server
Note: Refer to the Velocity Help Center for the latest information on supported database applications and versions.
Ingesting Oracle Data Process
As part of ingesting data into the Velocity Storage Server, Velocity creates a logical copy of the data as it existed at that point in time. RMAN is leveraged to perform the ingest by copying data to an NFS share on the Velocity Storage Server.
The interaction with Oracle is very similar to the interaction a traditional backup application such as NetBackup has with Oracle, where RMAN controls the actual creation of the copy. In addition, for ASM support, Oracle utilities are used to create ASM volumes on the NFS backup destination.
Ingesting Oracle Data Using NetBackup
An Oracle database can also be ingested through integration with NetBackup, allowing both NetBackup and Velocity to catalog and use the data for backup and data provisioning respectively.
Ingesting Oracle Data - Notes
Velocity ingestion - Process is initiated by the Velocity Client on the Oracle server, after receiving commands from the Velocity Storage Server
NetBackup ingestion – Process is initiated by the NetBackup Client on the Oracle server, after receiving policy commands from the NetBackup Master Server
When using NetBackup ingestion, both RMAN and NetBackup catalogs are notified of the ingestion event and updated
With either ingestion process, ingestion is performed by Oracle RMAN
First ingestion is a full copy of the database; subsequent ingestions are incremental
Feature Summary
Essentially, Velocity remotely exposes (via NFS mount) an ingested database point-in-time (PIT) to a separate Masking Server and allows the DBA to run their own masking scripts or preparation processes against that database from the Masking Server. Once the data masking/preparation processes have finished, Velocity creates a new PIT snapshot from which sandbox users can provision virtual data copies.
Feature Details/Steps
Database is ingested to the Velocity Storage Server. This is a data movement event driven by RMAN.
Database owner can now access database copy. Sandbox users cannot access this database copy.
Sandbox is provisioned by database owner and exposed to Preparation (masking) Server via NFS.
Database owner executes masking scripts and sandbox copy is properly prepared (masked).
Sandbox owner runs Velocity snapshot tool and a new database copy is created from the snapshot process.
The new database copy is ready for sandbox users to access. Sandbox users can now generate virtual data copies.
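The preparation step in the flow above typically boils down to SQL that overwrites sensitive columns before the snapshot is taken. The following toy illustration uses an in-memory SQLite database; the table, column, and masking rule are invented for the example and are not part of Velocity.

```python
import sqlite3

# Toy data-masking pass of the kind a DBA might run against a 'raw' sandbox
# before snapshotting. Table name and masking rule are invented for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'alice@example.com')")

# Replace real addresses with deterministic, non-routable placeholders.
conn.execute("UPDATE customers SET email = 'user' || id || '@masked.invalid'")
masked = conn.execute("SELECT email FROM customers").fetchone()[0]
```

After a pass like this completes, the snapshot tool would capture the masked state, and only that masked copy would be exposed to sandbox users.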
Support for Oracle 12c CDB/PDB Databases
Velocity 2.8 supports ingesting Oracle container databases (CDBs) and pluggable databases (PDBs)
Databases are ingested at the CDB level, which includes all pluggable databases within
The process for ingesting Oracle databases remains the same
Process for provisioning virtual CDB/PDB database copies remains the same
For more information on Oracle CDBs and PDBs:
https://docs.oracle.com/database/121/ADMQS/GUID-0FEBEF5F-DF3E-4101-B18B-84921E2F6AA2.htm#ADMQS12498
Velocity Ingestion Schedules
2.7 includes scheduled ingestion and scheduled deletion (retention) for Oracle
2.8 includes scheduled ingestion for SQL
Velocity storage server must be at least 2.7 to support new scheduling and retention features; if storage server is a previous version, user can’t configure or edit schedules
Cloud console requests to create/delete schedule objects handled by vsched service on the storage server; schedule executions done by storage server (jobs run even if cloud console contact is lost)
Existing schedules can be viewed/edited by selecting the ‘Database Sources’ object in the cloud console; schedules are an attribute or component of database sources
Schedules support days/weeks/months frequencies; schedules can be given optional names and descriptions
Option to set specific days on which ingest should occur
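The days/weeks/months frequencies with optional specific weekdays described above can be modeled with a small due-date check. This is a purely illustrative sketch; vsched's real evaluation logic is not public, and the function below is an assumption, not Velocity's implementation.

```python
import datetime

# Illustrative model of an ingestion schedule (not vsched's actual logic):
# a frequency unit ("days", "weeks", or "months"), an interval, and an
# optional set of permitted weekdays (Monday=0 ... Sunday=6).
def is_due(today: datetime.date, start: datetime.date,
           unit: str, interval: int, weekdays=None) -> bool:
    if weekdays is not None and today.weekday() not in weekdays:
        return False                       # restricted to specific days
    delta_days = (today - start).days
    if delta_days < 0:
        return False                       # schedule has not started yet
    if unit == "days":
        return delta_days % interval == 0
    if unit == "weeks":
        return delta_days % (7 * interval) == 0
    if unit == "months":
        months = (today.year - start.year) * 12 + (today.month - start.month)
        return months % interval == 0 and today.day == start.day
    raise ValueError(f"unknown unit: {unit}")
```

A weekly schedule started on 2018-01-01, for instance, fires again on 2018-01-08 but not on 2018-01-09.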
Scheduling Notes
One schedule per Database Source
Scheduling mechanism on appliance is ‘fire and forget’ and does not attempt to monitor success or retry
Schedules are driven by the Storage Server and will continue even if connection to Cloud Console is lost; metadata will be delivered back to Cloud Console once connection is re-established
If Storage Server goes down, missed schedules are ignored; will not try to run missed scheduled ingestions
Ingesting SQL Data Process
As part of ingesting data into the Velocity Storage Server, Velocity creates a logical copy of the data as it existed at that point in time. VSS Copy is leveraged to perform the ingest by copying data to a CIFS share on the Velocity Storage Server.
Ingesting SQL Data – Velocity Client Notes
For Windows servers hosting SQL, the Velocity Client is an MSI package
Client must be installed manually or using a 3rd-party software management system; silent install is supported
Installing the Velocity Client creates a service with Local System Account privileges; service facilitates execution of commands received from Velocity Storage Server
Commands are received using an HTTPS pull request from Velocity Client
Connections initiated from Velocity Storage Server (no need to open additional ports)
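The HTTPS pull model above can be sketched as a simple polling loop. Everything protocol-specific here is a hypothetical assumption: the endpoint path, the JSON command format, and the polling interval are invented for illustration and are not Velocity's actual client protocol.

```python
import json
import time
import urllib.request

# Purely illustrative sketch of a command-pull loop; endpoint path and
# command format are hypothetical, not Velocity's actual protocol.

def parse_commands(body: str) -> list:
    """Decode one polled batch: a JSON list of {'action': ...} objects."""
    return [cmd["action"] for cmd in json.loads(body)]

def poll_loop(server: str, handler, interval_s: int = 30, polls: int = 1):
    # The client initiates each HTTPS request outbound, which is why no
    # additional inbound port needs to be opened on the database host.
    for _ in range(polls):
        url = f"https://{server}/client/commands"   # hypothetical endpoint
        with urllib.request.urlopen(url) as resp:
            for action in parse_commands(resp.read().decode()):
                handler(action)                     # e.g. start a VSS snapshot
        time.sleep(interval_s)
```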
Ingesting SQL Data - Notes
Process is initiated by the Velocity Client on the SQL server, after receiving commands from the Velocity Storage Server
Ingestion uses SQL VSS Writer which captures a snapshot of the SQL database
Snapshot captured using the VSS Copy method; logs are not truncated
Ingestion moves copy of SQL database to CIFS share on Velocity Storage Server
Ingestion events are full database copies, and are supported for standalone SQL configurations only
Ingesting SQL Data - Notes
Ingestion can be initiated immediately or at a later time
Each SQL ingestion is a data movement event which captures a full copy of the source database
Ingestion of SQL databases uses ‘VSS Copy’ method; logs are not truncated during ingestion
Velocity Ingestion Schedules
2.8 includes scheduling support for SQL databases
Velocity storage server must be at least 2.8 to support new SQL scheduling features
Console requests to create/delete schedule objects handled by vsched service on the storage server; schedule executions done by storage server (jobs run even if cloud console contact is lost)
Existing schedules can be viewed/edited by selecting the ‘Database Sources’ object in the cloud console; schedules are an attribute or component of database sources
Schedules support days/weeks/months frequencies; schedules can be given optional names and descriptions
Option to set specific days on which ingest should occur
Scheduling Notes
One schedule per Database Source
Scheduling mechanism on appliance is ‘fire and forget’ and does not attempt to monitor success or retry
Schedules are driven by the Storage Server and will continue even if connection to Cloud Console is lost; metadata will be delivered back to Console once connection is re-established
If Storage Server goes down, missed schedules are ignored; will not try to run missed scheduled ingestions
Provisioning Virtual Database Copies - Introduction
As part of ingesting data into the Velocity Storage Server, we create a logical copy of the data as it existed at that point in time.
After a user selects a point in time (ingested copy) to provision in the management console, that request is communicated to the storage server. The storage server prepares a virtual copy for use. This involves duplicating the extent map from the ingested copy. The new extent map for the virtual copy references the existing data extents from the ingested copy. VPFS exposes this as a file system, and the file system is exported via NFS to the server(s) specified by the user.
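The extent-map mechanics described above amount to a copy-on-write scheme: provisioning duplicates only the map, and writes allocate fresh extents for the writing copy. The sketch below models this; the class and method names are illustrative, not Velocity's VPFS implementation.

```python
# Illustrative copy-on-write extent-map model (not Velocity's actual code).
# An ingested copy owns data extents; provisioning a virtual copy merely
# duplicates the extent map, so both copies initially share all data.

class ExtentStore:
    """Holds immutable data extents, shared across all copies."""
    def __init__(self):
        self._extents = {}      # extent id -> bytes
        self._next_id = 0

    def write(self, data: bytes) -> int:
        eid = self._next_id
        self._extents[eid] = data
        self._next_id += 1
        return eid

    def read(self, eid: int) -> bytes:
        return self._extents[eid]

class VirtualCopy:
    """A copy is just an extent map: logical block -> extent id."""
    def __init__(self, extent_map: dict):
        self.extent_map = dict(extent_map)   # duplicating the map is cheap

    def provision(self) -> "VirtualCopy":
        # Provisioning copies only the map, never the underlying data.
        return VirtualCopy(self.extent_map)

    def write(self, store: ExtentStore, block: int, data: bytes):
        # A change allocates a new extent and updates this copy's map only;
        # other copies keep referencing the original extent.
        self.extent_map[block] = store.write(data)

    def read(self, store: ExtentStore, block: int) -> bytes:
        return store.read(self.extent_map[block])

store = ExtentStore()
ingested = VirtualCopy({})
ingested.write(store, 0, b"original")
sandbox = ingested.provision()        # instantaneous: map duplication only
sandbox.write(store, 0, b"modified")  # new extent created for the sandbox
```

This is why provisioning takes seconds regardless of database size, and why deleted extents may still be referenced by other virtual copies until no map points at them.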
Virtual Database Copies – Overview
Consists of a set of data extents and an extent map
Extent map references data extents that represent current copy
As changes are made to virtual database copy, new extents are created and map is updated
Virtual database copies exist within the Storage Server’s VPFS as data blocks and extents, and an extent map that represents the current virtual database copy. The extents can be shared across multiple virtual copies.
Virtual Database Copy Changes
As changes occur over time, data may be overwritten or deleted
Extents for overwritten or deleted data are removed from the map
Deleted extents may still be referenced in saved database copy or other virtual copies
Virtual Database Copy Access
Virtual copy is stored within the Velocity Storage Server’s VPFS
Virtual database copy is exposed transparently via a VPFS interface
The VPFS interface is exported from the Velocity Storage Server via NFS (Oracle) or CIFS (SQL)
Virtual Database Copy Removal and Deletion
Users can delete old or expired sandboxes
Sandboxes can be accessed and deleted from the Sandboxes section of the Velocity Cloud console
Recovering Databases
Velocity supports recovery of Oracle databases; SQL recovery support targeted for a later release
No additional configuration or licenses required for Oracle recovery
Oracle recovery is accomplished through Recovery Sandboxes
Recovery Sandboxes can only be created by DBA and Admin users
Creating a Recovery Sandbox executes an RMAN script which moves data to target Oracle host
Current valid control file must be available on the source Oracle host; Velocity does not restore the control file
Post-recovery operations required by Oracle DBA to finish recovery
Oracle Database Recovery
Requires Velocity 2.7+
Recover Oracle from any database copy in Velocity storage to original/source Oracle server
Can recover from any point-in-time as long as the database copy exists in Velocity storage
Supports recovering to a file system or ASM disk group
Now Velocity can provision + recover; both operations can be performed at the same time
Velocity recovery capability not limited by NBU retention policies – based on Velocity snapshots
One-step recovery – no staging step
Velocity Integration with NetBackup
By provisioning virtual copies from the NetBackup ecosystem, organizations don’t need additional infrastructure or separate processes to create, store, and manage physical copies. Unlike physical copies, virtual copies provisioned from backup storage occupy a fraction of the space and consume zero production capacity.
Currently the integration between NetBackup and Velocity applies to Oracle databases only; support for using integration between NetBackup and Velocity for additional workloads – such as SQL – is planned for a future release.