This document discusses how Hitachi provides data storage solutions to optimize oil and gas exploration workflows using the Schlumberger Petrel software platform. Hitachi storage solutions such as the Hitachi NAS Platform and the Hitachi Storage Adapter for Petrel help oil and gas companies manage rapidly growing data volumes and collaborate more effectively to make decisions faster. The solutions provide high performance, scalability and data management capabilities, helping companies accelerate the discovery process.
Pervasive analytics through data & analytic centricity (Cloudera, Inc.)
Cloudera and Teradata discuss a best-in-class solution that enables companies to put data and analytics at the center of their strategy and achieve greater agility, while reducing the cost and complexity of their current environment.
Zero Downtime, Zero Touch Stretch Clusters from Software-Defined Storage (DataCore Software)
Business continuity, especially across data centers in nearby locations, often depends on complicated scripts, manual intervention and numerous checklists. Those error-prone processes are exponentially more difficult when the data storage equipment differs between sites.
Such difficulties force many organizations to settle for partial disaster recovery measures, conceding data loss and hours of downtime during occasional facility outages.
In this webcast and live demo, you’ll learn about:
• Software-defined storage services capable of continuously mirroring data in real time between unlike storage devices.
• Non-disruptive, zero-touch failover within a stretched cluster.
• Rapid restoration of normal conditions when the facilities come back up.
Hitachi Unified Storage 100 family systems consolidate and manage block, file and object data on a central platform. For more information on our unified storage please visit: http://www.hds.com/products/storage-systems/hitachi-unified-storage-100-family.html?WT.ac=us_mg_pro_hus100
Multi-tenant Hadoop - the challenge of maintaining high SLAs (DataWorks Summit)
In a shared configuration, the same Hadoop environment supports many applications. Each has specific requirements and criticality (SLA), yet they all rely on an assembly of shared application building blocks.
At the same time, the life cycle of a cluster is not static: it evolves horizontally, with the arrival of new applications, and vertically, as existing applications grow in load or functionality.
With this in mind, a multi-tenant production cluster presents several challenges, including but not limited to:
- Maintaining a high level of SLA for a set of use cases with heterogeneous needs
- Planning and implementing the architecture evolution of a production cluster so that SLAs are maintained as new use cases are integrated
EDF will present how it manages the heterogeneity of SLAs inherent in any big data cluster, focusing on how it is renovating its cluster, its organization, its processes and its approach in order to deliver a platform with strong SLAs throughout its life cycle.
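By way of illustration only (this is not EDF's implementation), SLA heterogeneity on a multi-tenant cluster is often handled by reserving guaranteed capacity per tenant queue, in the spirit of the YARN Capacity Scheduler. The minimal Python sketch below, with hypothetical numbers, validates such a layout and checks whether a new use case fits without eroding SLA-bound queues.

```python
# Minimal sketch (not EDF's implementation): validate a per-tenant capacity
# layout for a multi-tenant cluster, in the spirit of YARN Capacity
# Scheduler queue planning. All numbers are hypothetical.

# Guaranteed share of cluster resources per tenant queue, in percent.
planned_queues = {
    "critical_reporting": 40,  # strict-SLA use case
    "data_science": 25,
    "ingestion": 20,
    "ad_hoc": 15,              # best-effort queue, no SLA
}

def validate_layout(queues):
    """Guaranteed capacities must account for exactly the whole cluster."""
    total = sum(queues.values())
    if total != 100:
        raise ValueError(f"queue capacities must sum to 100%, got {total}%")

def can_admit(queues, demand_pct, donor="ad_hoc"):
    """A new use case is admitted only if it fits inside the best-effort
    'donor' queue, so SLA-bound queues keep their guarantees."""
    return queues[donor] >= demand_pct

validate_layout(planned_queues)
print(can_admit(planned_queues, demand_pct=10))  # True: fits in ad_hoc
print(can_admit(planned_queues, demand_pct=25))  # False: would erode SLAs
```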
Speaker
Edouard Rousseaux, Tech Lead, EDF
Continuously improving factory operations is of critical importance to manufacturers. Consider the facts: the total cost of poor quality amounts to a staggering 20% of sales (American Society for Quality), and unplanned downtime costs plants approximately $50 billion per year (Deloitte).
The most pressing questions are: which process variables affect quality and yield, and which process variables predict equipment failure? Getting to those answers is giving forward-thinking manufacturers a leg up over competitors.
The speakers address the data management challenges facing today's manufacturers, including proprietary systems and siloed data sources, as well as an inability to make sensor-based data usable.
Integrating enterprise data from ERP, MES, maintenance systems and other sources with real-time operations data from sensors, PLCs, SCADA systems and historians represents a major first step. But how to get started? What is the value of a data lake? How are AI/ML being applied to enable real-time action?
Join us for this educational session, which includes a rare view from one of our SWAT team experts into our roadmap for an open source industrial IoT data management platform.
Key Takeaways:
• How to choose an initial project from which to quickly demonstrate high value returns
• Understand the value of multivariate data sources, as opposed to a single sensor on a piece of equipment
• Understand advances in big data management and streaming analytics that are paving the way to next-generation factory performance
MICHAEL GER, General Manager, Manufacturing and Automotive, Hortonworks and RYAN TEMPLETON, Senior Solutions Engineer, Hortonworks
BI on Big Data with instant response times at Verizon (DataWorks Summit)
As one of the world's largest communications companies, Verizon faced the challenge of analyzing all the video viewership data from its 6+ million viewers to assess a variety of metrics, such as viewing patterns, popular channels, popular programs, prime-time viewership, and VOD/DVR consumption. Queries would often take hours or even days to run, and longer still to evaluate, so the company only ran the analysis periodically. This led to lag times in Verizon's ability to respond to network, content, customer and company problems. Using Kyvos, Verizon was able to build a BI consumption layer that helped it analyze this massive data in its entirety, with the ability to slice, dice and drill down to the lowest levels of granularity, with instantaneous response times. Analysis across mediums, viewers, geographies and diagnostics has enabled Verizon to get faster, deeper insights at a more granular level and optimize its engagement with partners and customers. ARUN JINDE, Technical Architect, Verizon and AJAY ANAND, VP Products, Kyvos Insights
Achieve Higher Quality Decisions Faster for a Competitive Edge in the Oil and... (Hitachi Vantara)
Hitachi next-generation unified storage solutions meet the challenges of today’s data-intensive oil and gas exploration and production activities. For more information on Hitachi Unified Storage and Hitachi NAS Platform 4000 series please visit: http://www.hds.com/products/file-and-content/network-attached-storage/?WT.ac=us_mg_pro_hnasp
A-B-C Strategies for File and Content Brochure (Hitachi Vantara)
Explains each strategy, including archive first, back up less, consolidate more, distributed IT efficiency, enable e-discovery and compliance, and facilitate cloud. For more information on Unstructured Data Management Solutions by HDS please visit: http://www.hds.com/solutions/it-strategies/unstructured-data-management.html?WT.ac=us_mg_sol_udm
As more companies grow their business in global markets, they discover the need to capture new opportunities in a matter of days rather than months to gain competitive advantage and capture new market share. Their machines are producing terabytes of various data types (video, audio, Microsoft® SharePoint®, sensor data, Microsoft Excel® files), and leaders are searching for the right technologies to capture this data and help provide a better understanding of their business. The HDS big data product roadmap will help customers build a big data enterprise plan that ingests data faster and correlates meaningful data sets to create intelligence that's easy to consume and helps leaders make the right business decisions. View this webcast to learn about Hitachi's product roadmap for big data. For more information on HDS Big Data Solutions please visit: http://www.hds.com/solutions/it-strategies/big-data/?WT.ac=us_mg_sol_bigdat
Lessons learned processing 70 billion data points a day using the hybrid cloud (DataWorks Summit)
NetApp receives 70 billion data points of telemetry information each day from its customers' storage systems. This telemetry data contains configuration information, performance counters, and logs. All of this data is processed using multiple Hadoop clusters, and feeds a machine learning pipeline and a data serving infrastructure that produces insights for customers via an application called Active IQ. We describe the evolution of our Hadoop infrastructure from a traditional on-premises architecture to the hybrid cloud, and lessons learned.
We’ll discuss the insights we are able to produce for our customers, and the techniques used. Finally, we describe the data management challenges with our multi-petabyte Hadoop data lake. We solved these problems by building a unified data lake on-premises and using the NetApp Data Fabric to seamlessly connect to public clouds for data science and machine learning compute resources.
Architecting a truly hybrid cloud implementation allowed NetApp to free up our data scientists to use any software on any cloud, kept the customer log data safe on NetApp Private Storage in Equinix, enabled faster innovation and code releases, and provided the flexibility to use any public cloud while the data remains on NetApp storage in Equinix.
Speaker
Pranoop Erasani, NetApp, Senior Technical Director, ONTAP
Shankar Pasupathy, NetApp, Technical Director, ACE Engineering
Data analytics, Spark, Hadoop and AI have become fundamental tools for driving digital transformation. A critical challenge is moving from isolated experiments to an organizational or enterprise production infrastructure. In this talk, we break apart the modern data analytics workflow to focus on the data challenges across different phases of the analytics and AI life cycle. By taking a unified approach to data storage for AI and analytics, organizations can reduce costs, modernize their data strategy and build a sustainable enterprise data lake. By anticipating how Hadoop, Spark, TensorFlow, Caffe and traditional analytics such as SAS and HPC can share data, IT departments and data science practitioners can not only coexist, but speed time to insight. We will present the tangible benefits of a reference architecture using real-world installations that span proprietary and open-source frameworks. Using intelligent software-defined shared storage, users are able to eliminate silos, reduce multiple data copies, and improve time to insight. PALLAVI GALGALI, Offering Manager, IBM and DOUGLAS O'FLAHERTY, Portfolio Product Manager, IBM
Not just a necessary evil, it's good for business: implementing PCI DSS contr... (DataWorks Summit)
For firms in the financial industry, especially within regulated organizations such as credit card processors and banks, PCI DSS compliance has become a business and operational necessity. Although the blueprint of a PCI-compliant architecture varies from organization to organization, the mixture of modern Hadoop-based data lakes and legacy systems is a common theme.
In this talk, we will discuss recent updates to PCI DSS and how significant portions of PCI DSS compliance controls can be achieved using the open source Hadoop security stack and technologies for the Hadoop ecosystem. We will provide a broad overview of implementing key aspects of the PCI DSS standard at Worldpay, such as encryption management, data protection with anonymization, separation of duties, and deployment considerations for securing Hadoop clusters at the network layer, from a practitioner's perspective. The talk will provide patterns and practices that map current Hadoop security capabilities to the security controls that a PCI-compliant environment requires.
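By way of illustration (Worldpay's actual controls are not described in this abstract), "data protection with anonymization" for card data is commonly implemented as masking plus keyed tokenization before records land in the lake. A minimal Python sketch, with a hypothetical key and record:

```python
# Minimal sketch of PAN anonymization before ingestion into a data lake.
# Illustrative only, not Worldpay's implementation. In practice the HMAC
# key would live in a key management service or HSM, never in source code.
import hashlib
import hmac

SECRET_KEY = b"example-key-from-kms"  # hypothetical placeholder key

def mask_pan(pan: str) -> str:
    """Keep the first 6 and last 4 digits and mask the rest, the common
    truncation format for displaying card numbers."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

def tokenize_pan(pan: str) -> str:
    """Deterministic keyed token: equal PANs map to equal tokens, so joins
    and aggregations still work on the anonymized data."""
    return hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()

record = {"pan": "4111111111111111", "amount": 42.50}
safe = {
    "pan_masked": mask_pan(record["pan"]),
    "pan_token": tokenize_pan(record["pan"]),
    "amount": record["amount"],
}
print(safe)
```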
Speaker
David Walker, Enterprise Data Platform Programme Director, Worldpay
Srikanth Venkat, Senior Director Product Management, Hortonworks
Microsoft SQL Server 2012 Data Warehouse on Hitachi Converged Platform (Hitachi Vantara)
Accelerate breakthrough insights across your organization with Microsoft SQL Server 2012 Data Warehouse running on the mission-critical and ready-to-deploy Hitachi server-storage-networking platform, Hitachi Unified Compute Platform. Amplify infrastructure performance with Hitachi and Microsoft SQL Server 2012 Fast Track Data Warehouse xVelocity in-memory technologies. Learn how your organization can extract 100 million+ records in 2 or 3 seconds, versus the 30 minutes required previously. With SQL Server 2012 Fast Track Data Warehouse and Hitachi software, your organization will be able to leverage a data platform that processes any data anywhere. View this webcast and learn: how to reduce deployment time with ready-to-deploy solutions that have been engineered and pre-configured by Hitachi and validated by the Microsoft Fast Track Data Warehouse program; how Hitachi and Microsoft have optimized performance for your data warehouse requirements; and how your organization can realize immediate ROI from your data warehouse investment. For more information on Hitachi Unified Compute Platform please visit: http://www.hds.com/products/hitachi-unified-compute-platform/?WT.ac=us_mg_pro_ucp
Face Data Challenges of Life Science Organizations With Next-Generation Hitac... (Hitachi Vantara)
Hitachi Unified Storage 100 family drives efficiency at reduced costs and improves the discovery-to-market cycle for life sciences organizations. For more information on Hitachi Unified Storage and Hitachi NAS Platform 4000 series please visit: http://www.hds.com/products/file-and-content/network-attached-storage/?WT.ac=us_mg_pro_hnasp
Big Data – Shining the Light on Enterprise Dark Data (Hitachi Vantara)
Content stored for a business purpose often lacks the structure or metadata required to determine its original purpose. With Hitachi Data Discovery Suite and Hitachi Content Platform, businesses can uncover dark data that could be leveraged for better business insight, and surface compliance issues before they become business risks. View this session and learn: What is enterprise dark data? How can enterprise dark data impact business decisions? How can you augment your underutilized data and deliver more value? How can you decrease the headache and challenges created by dark data? For more information please visit: http://www.hds.com/products/file-and-content/
How to use flash drives with Apache Hadoop 3.x: Real world use cases and proo... (DataWorks Summit)
Apache Hadoop 3.x ushers in major architectural advances such as erasure coding in HDFS, containerized workload flexibility, GPU resource pooling and a litany of other features. These enhancements drive real benefits when combined with high-speed, high-capacity solid state drives (SSDs).
Micron is a user of Apache Hadoop as well as an innovator in next-generation IT architecture, pushing the envelope on flash storage with the latest 3D NAND. Micron lab benchmarks show that adding a single SSD to existing HDD-based cluster nodes can deliver 200% faster Hadoop performance, at a fraction of the cost of new nodes.
In this session, Micron and Hortonworks will show real-world results demonstrating the tangible benefits of combining Apache Hadoop 3.x with the latest in non-volatile storage: NVMe™ solid state drives in well-designed platforms. We will explore specific workloads and application acceleration, showing how Hadoop 3.x with SSDs can build analytics platforms that deliver low-latency, high-performance active archives with better results and reduced storage overhead, providing a sustainable competitive advantage for many applications.
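For context on the "reduced storage overhead" point: HDFS erasure coding in Hadoop 3.x replaces 3x replication with schemes such as Reed-Solomon 6+3. A quick back-of-the-envelope comparison in Python (the raw capacity figure is a hypothetical example):

```python
# Storage overhead: classic HDFS 3x replication vs. Reed-Solomon (6 data
# blocks, 3 parity blocks) erasure coding in Apache Hadoop 3.x.
raw_capacity_tb = 1000.0            # hypothetical raw cluster capacity

replication_factor = 3.0            # three full copies of every block
rs_overhead = (6 + 3) / 6           # RS-6-3: 1.5x raw bytes per logical byte

usable_replicated = raw_capacity_tb / replication_factor
usable_erasure_coded = raw_capacity_tb / rs_overhead

print(f"3x replication:        {usable_replicated:.0f} TB usable")    # ~333 TB
print(f"RS-6-3 erasure coding: {usable_erasure_coded:.0f} TB usable") # ~667 TB
# Erasure coding roughly doubles usable capacity while still tolerating
# the loss of any 3 blocks in a stripe.
```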
Speakers
Saumitra Buragohain, Hortonworks, Sr. Director, Product Management
Mike Cunliffe, Micron Technology, Data Management Architect
Can data virtualization uphold performance with complex queries? (Denodo)
Watch full webinar here: https://bit.ly/2JzypTx
There are myths about data virtualization that are based on misconceptions and even falsehoods. These myths can confuse and worry people who - quite rightly - look at data virtualization as a critical technology for a modern, agile data architecture.
We've decided that we need to set the record straight, so we put together this webinar series. It's time to bust a few myths!
In the first webinar of the series, we’ll be busting the 'performance' myth. “What about performance?” is usually the first question that we get when talking to people about data virtualization. After all, the data virtualization layer sits between you and your data, so how does this affect the performance of your queries? Sometimes the myth is perpetuated by people with alternative solutions…the ‘Put all your data in our Cloud and everything will be fine. Data virtualization? Nah, you don’t need that! It can't handle big queries anyway,’ type of thing.
Join us for this webinar to look at the basis of the 'performance' myth and examine whether there is any underlying truth to it.
Integrating and Analyzing Data from Multiple Manufacturing Sites using Apache... (DataWorks Summit)
In this talk, Mark Baker will show how CSL Behring integrates and analyzes data from multiple manufacturing sites, using Apache NiFi to feed a central Hadoop data lake.
The challenge of merging data from disparate systems has been a leading driver behind investments in data warehousing systems, as well as in Hadoop. While data warehousing solutions are ready-built for RDBMS integration, Hadoop adds the benefits of near-limitless and economical scale, not to mention the variety of structured and unstructured formats that it can handle. Whether using a data warehouse, Hadoop or both, physical data movement and consolidation is the primary method of integration.
There may also be challenges with synchronizing rapidly changing data from a system of record to a consolidated Hadoop platform. This introduces the need for "data federation", where data is integrated without copying it between systems.
For historical/batch use cases, data is replicated from remote data hubs into a central data lake using Apache NiFi. We will demo analyzing data in Apache Zeppelin using Apache Spark and Apache Hive.
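As a rough illustration of that kind of Zeppelin workflow (the table and column names below are hypothetical placeholders, not taken from CSL Behring's demo), a Spark session with Hive support can query the consolidated site data directly:

```python
# Sketch of a Zeppelin-style analysis with PySpark over a Hive table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("manufacturing-site-analysis")
    .enableHiveSupport()   # read tables registered in the Hive metastore
    .getOrCreate()
)

# Batch data replicated from each site by NiFi lands in one Hive table.
batches = spark.table("datalake.site_batches")

# Compare average yield per manufacturing site.
(batches
 .groupBy("site_id")
 .agg(F.avg("yield_pct").alias("avg_yield"),
      F.count("batch_id").alias("batches"))
 .orderBy(F.desc("avg_yield"))
 .show())
```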
Simplify Data Center Monitoring With a Single-Pane View (Hitachi Vantara)
Keeping IT systems up and well tuned requires constant attention, but the task is too often complicated by the separate monitoring tools required to watch applications, servers, networks and storage. This white paper discusses how system administrators can consolidate oversight of these components, particularly where the DataCore SANsymphony-V storage hypervisor virtualizes the storage resources. Such visibility is made possible through the integration of SANsymphony-V with Hitachi IT Operations Analyzer.
Hitachi Virtual Storage Platform is the only 3-D scaling storage platform designed for all data types. It is the only storage architecture that flexibly adapts for performance and capacity, and virtualizes multivendor storage. With the unique management capabilities of Hitachi Command Suite software, it transforms the data center.
Power the Creation of Great Work Solution Profile (Hitachi Vantara)
This solution profile discusses how quality and speed are critical in solving storage and data management bottlenecks, delivering cost-effective, highly scalable solutions for post-production tasks. Whether the work is CGI animation, rendering or transcoding, Hitachi Data Systems powers digital workflows, enabling extraordinary creative and business achievements with HUS and HNAS infrastructure offerings. For more information on Hitachi Unified Storage and Hitachi NAS Platform 4000 Series please visit: http://www.hds.com/products/file-and-content/network-attached-storage/?WT.ac=us_mg_pro_hnasp
Learn more about Hitachi Content Platform Anywhere by visiting http://www.hds.com/products/file-and-content/hitachi-content-platform-anywhere.html
and more information on the Hitachi Content Platform is at http://www.hds.com/products/file-and-content/content-platform
Accelerate Analytics and ML in the Hybrid Cloud Era (Alluxio, Inc.)
Alluxio Webinar
April 6, 2021
For more Alluxio events: https://www.alluxio.io/events/
Speakers:
Alex Ma, Alluxio
Peter Behrakis, Alluxio
Many companies we talk to have on-premises data lakes and use the cloud(s) to burst compute, and many are now establishing new object data lakes as well. As a result, analytics frameworks such as Hive, Spark and Presto, along with machine learning workloads, experience sluggish response times when data and compute sit in multiple locations. We also know there is an immense and growing data management burden to support these workflows.
In this talk, we will walk through what Alluxio’s Data Orchestration for the hybrid cloud era is and how it solves the performance and data management challenges we see.
In this tech talk, we'll go over:
- What is Alluxio Data Orchestration?
- How does it work?
- Alluxio customer results
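To make the data orchestration idea concrete (a sketch under assumptions, not Alluxio documentation): when Alluxio is mounted through its POSIX/FUSE interface, existing Python analytics code can read remote object-store data through a local path while Alluxio handles caching and data locality. The mount point and file path below are hypothetical.

```python
# Sketch only: assumes an Alluxio FUSE mount at /mnt/alluxio fronting an
# on-premises data lake and cloud object stores. Paths are hypothetical.
import csv

# To the application this is ordinary file I/O; Alluxio transparently
# fetches and caches the underlying data, wherever it physically lives.
with open("/mnt/alluxio/sales/2021/orders.csv", newline="") as f:
    total = sum(float(row["amount"]) for row in csv.DictReader(f))

print(f"total order volume: {total:.2f}")
```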
Data Orchestration for the Hybrid Cloud Era (Alluxio, Inc.)
Alluxio Community Office Hour
October 20, 2020
For more Alluxio events: https://www.alluxio.io/events/
Speaker(s):
Alex Ma, Alluxio
Peter Behrakis, Alluxio
Many companies we talk to have on-premises data lakes and use the cloud(s) to burst compute, and many are now establishing new object data lakes as well. As a result, analytics frameworks such as Hive, Spark and Presto, along with machine learning workloads, experience sluggish response times when data and compute sit in multiple locations. We also know there is an immense and growing data management burden to support these workflows.
In this talk, we will walk through what Alluxio’s Data Orchestration for the hybrid cloud era is and how it solves the performance and data management challenges we see.
In this tech talk, we'll go over:
- What is Alluxio Data Orchestration?
- How does it work?
- Alluxio customer results
Accelerate Analytics and ML in the Hybrid Cloud Era (Alluxio, Inc.)
Alluxio Webinar
September 22, 2020
For more Alluxio events: https://www.alluxio.io/events/
Speakers:
Alex Ma, Alluxio
Peter Behrakis, Alluxio
Many companies we talk to have on-premises data lakes and use the cloud(s) to burst compute, and many are now establishing new object data lakes as well. As a result, analytics frameworks such as Hive, Spark and Presto, along with machine learning workloads, experience sluggish response times when data and compute sit in multiple locations. We also know there is an immense and growing data management burden to support these workflows.
In this talk, we will walk through what Alluxio’s Data Orchestration for the hybrid cloud era is and how it solves the performance and data management challenges we see.
In this tech talk, we'll go over:
- What is Alluxio Data Orchestration?
- How does it work?
- Alluxio customer results
Data lakes are central repositories that store large volumes of structured, unstructured, and semi-structured data. They are ideal for machine learning use cases and support SQL-based access and programmatic distributed data processing frameworks. Data lakes can store data in the same format as its source systems or transform it before storing it. They support native streaming and are best suited for storing raw data without an intended use case. Data quality and governance practices are crucial to avoid a data swamp. Data lakes enable end-users to leverage insights for improved business performance and enable advanced analytics.
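To illustrate the schema-on-read pattern described above (the paths and fields are hypothetical), raw files can be stored as-is and given structure only at query time; a PySpark sketch:

```python
# Schema-on-read sketch: raw events are stored untransformed in the lake
# and structured only when queried. Paths and fields are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-schema-on-read").getOrCreate()

# Raw, semi-structured data kept in its source format.
events = spark.read.json("s3a://example-lake/raw/clickstream/")

# SQL-based access over the raw data, with no upfront transformation.
events.createOrReplaceTempView("clicks")
spark.sql("""
    SELECT page, COUNT(*) AS views
    FROM clicks
    GROUP BY page
    ORDER BY views DESC
    LIMIT 10
""").show()
```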
Verizon Centralizes Data into a Data Lake in Real Time for Analytics (DataWorks Summit)
Verizon – Global Technology Services (GTS) was challenged by a multi-tier, labor-intensive process when trying to migrate data from disparate sources into a data lake to create financial reports and business insights. Join this session to learn more about how Verizon:
• Easily accessed data from multiple sources including SAP data
• Ingested data into major targets including Hadoop
• Achieved real-time insights from data leveraging change data capture (CDC) technology
• Reduced costs and labor
Oracle Big Data Appliance and Big Data SQL for advanced analytics (jdijcks)
Overview presentation showing Oracle Big Data Appliance and Oracle Big Data SQL in combination, and why this really matters. Big Data SQL brings you the unique ability to analyze data across the entire spectrum of systems: NoSQL, Hadoop and Oracle Database.
Turning Petabytes of Data into Profit with Hadoop for the World's Biggest Ret... (Cloudera, Inc.)
PRGX is the world's leading provider of accounts payable audit services and works with leading global retailers. As new forms of data started to flow into their organizations, standard RDBMS systems did not allow them to scale. Now, by using Talend with Cloudera Enterprise, they are able to achieve a 9-10x performance benefit in processing data, reduce errors, and provide more innovative products and services to end customers.
Watch this webinar to learn how PRGX worked with Cloudera and Talend to create a high-performance computing platform for data analytics and discovery that rapidly allows them to process, model, and serve massive amounts of structured and unstructured data.
Similar to Hitachi Solution Profile: Advanced Project Version Management in Schlumberger Petrel Environments
Hitachi Vantara and our special guest, Dr. Alison Brooks, Research Director at IDC, discuss:
• How video and other IoT data can help your business become smarter, safer and more efficient.
• How to harness IoT data to gain operational intelligence and achieve better business outcomes.
• How Hitachi’s customers are innovating with IoT to excel.
• Which practical applications and best practices will get you started on your own IoT journey to reach your goals and tackle your challenges.
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bring... (Hitachi Vantara)
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bringing Flexibility, Agility and Readiness to the Real-Time Enterprise. VMworld 2015
Hitachi Virtual Infrastructure Integrator (V2I) is a VMware vCenter plugin plus associated software. It provides data management efficiency for large VM environments. Specifically, the latest release addresses virtual machine backup, recovery and cloning services. Customers want to leverage storage-based snapshots because they are scalable and offer more granular backup, reducing the time between backups from hours to minutes and improving RPO. VMworld 2015.
Economist Intelligence Unit: Preparing for Next-Generation Cloud (Hitachi Vantara)
Preparing for next-generation cloud: Lessons learned and insights shared is an Economist Intelligence Unit (EIU) research programme, sponsored by Hitachi Data Systems. In this report, the EIU looks at companies’ experiences with cloud adoption and assesses whether the technology has lived up to expectations. Where the cloud has fallen short of expectations, we set out to understand why. In cases of seamless implementation, we gather best practices from firms using the cloud successfully.
HDS Influencer Summit 2014: Innovating with Information to Address Business N... (Hitachi Vantara)
Top executives at HDS share how the company is innovating with information to address business needs. Learn how the company is transforming now and into the future. #HDSday
Information Innovation Index 2014 UK Research Results (Hitachi Vantara)
Hitachi Data Systems releases insights from its inaugural 'Information Innovation Index', a UK research report conducted by the independent UK technology market research agency Vanson Bourne, in which 200 IT decision-makers were surveyed during April 2014 to provide insights into how current approaches to IT are thwarting companies' ambitions to leverage data to drive innovation and business growth.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GridMate - End to end testing is a critical piece to ensure quality and avoid... (ThomasParaiso2)
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Quality characteristics can be extended with sustainability, and then measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Hitachi Helps Schlumberger Petrel Platform Environments Win the Race to Discovery With Innovation

Like you, we know that innovation is the key to progress. Hitachi Data Systems (HDS) and our ecosystem of partners work closely with you to accelerate the total workflow. We're with you from data acquisition, data management, seismic processing, visual interpretation, modeling automation, petrophysical analysis and property modeling, to simulation. By creating high-quality decision-support information, companies are able to confidently bid on large contracts with immense responsibilities. Or, they can decline participation where the information is less favorable. Either way, companies must arrive at well-researched decisions as quickly as possible.

Business success depends on keeping data pure and accessible. Hitachi Data Systems solutions create such efficiencies and protect your investment.
SOLUTION PROFILE
Challenges in Oil and Gas Exploration
■ Many interconnected, increasingly complex and rapidly changing workflows.
■ Exponential growth in the volume of data that must be analyzed within a shorter timeframe.
■ Computational demands increasing by orders of magnitude.
■ Facilitation of collaborative computing and secure access to data.
■ Management of the information lifecycle of all data associated with a project.
■ Flexibility to transparently optimize the use of different storage technologies without disrupting the workflow for end users.
Hitachi Provides a Storage Adapter for Advanced Project Management for the Schlumberger Petrel Platform

Optimize Exploration and Development Operations

The Petrel E&P software platform helps increase reservoir performance by improving asset team productivity. Geophysicists, geologists and reservoir engineers can develop collaborative workflows and integrate operations to streamline processes.

Advanced Project Version Management With Hitachi Storage Adapter for Petrel™
■ Unify workflows for E&P teams: Eliminate the gaps that require handoffs from one technical domain to the next, using Petrel model-centric workflows in a shared earth model.
■ Manage risk and uncertainty: Easily test multiple scenarios, analyze risk and uncertainty, capture data relationships and parameters to perform rapid updates as new data arrives, and perform detailed simulation history matching.
■ Enable knowledge management and best practices: Reduce workflow learning curves by capturing best practices via the Workflow Editor, providing quick access to preferred workflows and increased ease of use.
■ Expand the Petrel environment: Hitachi Storage Adapter for Petrel leverages the Ocean framework, enabling easy installation and access to the functionality within Petrel.
Take the Petrel Platform to New Levels With Hitachi Storage Adapter for Petrel

At Hitachi Data Systems, we understand the value of your data. We believe that data drives our world and information is the new currency. To this end, we deliver enterprise-level data management and availability. We also deliver exceptional hardware and software solutions that make processing massive quantities of data, in the fastest possible times, appear simple. We manage the information lifecycle and ensure secure access to the right information via multiple access paths in an ever-changing environment, as well as provide cost-efficient and nondisruptive scaling. Hitachi Storage Adapter for Petrel supports:

■ Hierarchical project versions that end users can manage, limited only by the amount of disk space available.
■ "File clones" with only incremental changes saved, reducing "save time" and storage space.
■ Project-level replication for disaster recovery scenarios, according to company-set policies.
■ Hierarchical view of project versioning for easy and intuitive management of copies.
■ Permission-based access and separate roles for users and system administration.
■ Ability to explore multiple paths and to restore a particular point in time.
■ Access to advanced NAS data management and protection for automated enforcement of company policies.
■ Ability to deliver the performance and capacity required in mixed application workflows.
■ Optimized storage technology use in the right place; transparent migration between technologies without impacting application workflows.
Maximize Productivity With Seamless Software and Hardware Integration

Hitachi Storage Adapter for Petrel completely integrates with Hitachi Data Systems storage and server solutions, making it possible to centrally manage data on the server instead of the workstation. Core data is served from shared storage with advanced data management, so the data is protected. Hitachi Storage Adapter for Petrel links into Hitachi NAS Platform (HNAS) file clones to quickly make well-defined local copies. The copies not only track where the files came from, but also allow end users to explore alternative paths. Each project version is captured and stored transparently to the central storage repository. At the same time, the adapter fully maintains information and access permissions that are viewable by the system administrator at all times.

By eliminating the need to save multiple copies in separate files, Hitachi Storage Adapter for Petrel enables consistency, collaboration and data protection. HNAS file cloning, used by the advanced project management adapter, stores only incremental changes, which minimizes transmission times and required storage capacity and makes clones extremely space-efficient.
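A rough worked example of why incremental clones matter (the numbers are hypothetical, not HNAS measurements): with full copies, space grows with project size times version count; with clones, it grows with project size plus the deltas.

```python
# Back-of-the-envelope comparison of full project copies vs. incremental
# file clones. All numbers are hypothetical, for illustration only.
project_size_gb = 500.0   # size of one Petrel project
versions = 20             # saved project versions
delta_fraction = 0.03     # ~3% of the data changes between versions

full_copies_gb = project_size_gb * versions
cloned_gb = project_size_gb + project_size_gb * delta_fraction * (versions - 1)

print(f"full copies:        {full_copies_gb:,.0f} GB")  # 10,000 GB
print(f"incremental clones: {cloned_gb:,.0f} GB")       # ~785 GB
```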
Hitachi NAS Platform

Hitachi NAS Platform is an advanced, integrated network attached storage (NAS) solution. It is a powerful tool for file sharing as well as file server consolidation, data protection and business-critical NAS workloads. With HNAS, you can solve challenges associated with data growth while achieving a low total cost of ownership.

Features
■ Powerful hardware-accelerated file system for multiprotocol file services, dynamic provisioning, intelligent tiering, virtualization and cloud infrastructure.
■ High performance and scalability: up to 2GB/sec throughput and 140,000 input/output operations per second (IOPS) per node, with up to 16PB of usable capacity.
■ File-level virtualization in a global namespace avoids technology or vendor dependencies. It also enables unified access to data stored on storage systems from other vendors.
■ Policy-based, universal file migration simplifies deploying new technology and migrating data without impacting application workflows.
■ Seamless integration with Hitachi SAN storage, Hitachi Command Suite and Hitachi Data Discovery Suite for advanced search and indexing across HNAS systems.
■ Integration with Hitachi Content Platform for active archiving, regulatory compliance and large object storage for cloud infrastructure.
Benefits
■ Simplifies your IT infrastructure by allowing you to consolidate NAS devices or file servers and migrate data by policy across multiple vendors and technologies.
■ Reduces the complexity of storage management and lowers your total cost of ownership.
■ Significantly improves efficiency, agility and utilization across NAS environments through advanced virtualization and data protection capabilities.
■ Offers exceptional performance and improves productivity for Schlumberger Petrel environments.
■ Allows organizations to reach decisions faster by reducing the processing cycle times required for Schlumberger applications.
■ Responds rapidly to changing demands as datasets grow and new analysis methods are deployed.
■ Increases the number of simultaneous users or projects to support larger workgroups and increasing business.
■ Optimizes the use of different storage technologies through intelligent and transparent tiering.
■ Implements and enforces data retention policies without manual intervention, through automated data management.
■ Scales to the highest capacity with standard file systems to manage larger projects.
USE CASES: USER-CREATED FILE CLONE

[Flow diagram] The user launches the Petrel platform and initiates a file clone request. The adapter first attempts to create an SMB file clone on the SMB share; on success, the user continues in Petrel. If the SMB file clone fails, the clone is retried on the Petrel workstation's local storage and flagged for a later move to the SMB share. If both file clones fail, an error is reported.

Use Case Prerequisites
1. Hitachi Storage Adapter for Petrel plug-in is installed.
2. SMB share log-ins/AD authentication is available for the advanced project management adapter. (This data will be created during installation, and modified and managed using the Hitachi Storage Adapter for Petrel admin utility.)

Use Case Assumptions
1. End-user data is on an SMB share.
2. The end user has enough storage on local disk to save a local snapshot if the SMB snapshot fails.

HNAS = Hitachi NAS Platform
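The diagram's control flow can also be read as pseudocode. The Python sketch below is an illustrative rendering of that logic, not the adapter's actual implementation; the helper functions are hypothetical stubs.

```python
# Illustrative sketch of the user-created file clone flow shown above.

def clone_on_smb_share(project: str) -> bool:
    """Stub: request an HNAS file clone on the SMB share."""
    return True  # pretend the SMB clone succeeds

def clone_on_local_disk(project: str) -> bool:
    """Stub: fall-back clone on the workstation's local storage."""
    return True

def flag_for_smb_move(project: str) -> None:
    """Stub: mark the local clone to be moved to the SMB share later."""
    print(f"{project}: flagged for SMB move")

def create_project_clone(project: str) -> str:
    if clone_on_smb_share(project):    # first attempt: clone on SMB share
        return "clone on SMB share"    # user continues working in Petrel
    if clone_on_local_disk(project):   # retry on local storage
        flag_for_smb_move(project)     # move to the SMB share later
        return "clone on local disk, flagged for SMB move"
    raise RuntimeError("file clone failed on SMB share and local disk")

print(create_project_clone("north-sea-prospect"))
```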
Collaborative Version Management

Hitachi Storage Adapter for Petrel facilitates the creation of unique point-in-time clones of a project, which can be used to build a hierarchical view of project versions. These versions can be evaluated and compared. The results can be correlated for a defined point in the workflow process to determine the most valuable version of that analysis and continue from those decision points.
Key Differentiators
■ Provides hierarchical project versions, which end users manage, limited only by the amount of disk space available.
■ Offers a project data management solution where active data is automatically and transparently synchronized with central storage.
■ Relieves end users of the task of reserving disk space in advance to create another file clone.
■ Enables central management of the project information.
■ Creates multiple trees and branches to maintain multiple versions simultaneously.
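As a closing illustration of the hierarchical versioning model described above (a sketch of the concept, not the adapter's data structures; all names are hypothetical), project versions form a tree in which any node can be branched from and any lineage restored:

```python
# Sketch of a hierarchical project-version tree like the one the adapter
# presents to end users. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Version:
    name: str
    parent: "Version | None" = None
    children: list["Version"] = field(default_factory=list)

    def branch(self, name: str) -> "Version":
        """Create a new point-in-time clone as a child of this version."""
        child = Version(name, parent=self)
        self.children.append(child)
        return child

    def lineage(self) -> list[str]:
        """Path from the root to this version, for point-in-time restore."""
        node, path = self, []
        while node:
            path.append(node.name)
            node = node.parent
        return path[::-1]

root = Version("baseline")
model_a = root.branch("fault-model-A")
model_b = root.branch("fault-model-B")   # a second branch: alternative path
refined = model_a.branch("history-match-1")
print(refined.lineage())  # ['baseline', 'fault-model-A', 'history-match-1']
```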