A key reason for using dynamic tiering for mainframe storage is performance. This session will focus on dynamic tiering in mainframe environments and how to configure and control tiering. The session ends with a detailed discussion of performance considerations when using Hitachi Dynamic Tiering. By viewing this webcast, you will: Understand Hitachi Dynamic Tiering and the options for configuring and controlling tiering. Understand the performance considerations and the type of performance improvements you might experience when you implement Hitachi Dynamic Tiering. For more information on Hitachi Dynamic Tiering please visit: http://www.hds.com/products/storage-software/hitachi-dynamic-tiering.html?WT.ac=us_mg_pro_dyntir
Consolidate More: High Performance Primary Deduplication in the Age of Abunda...Hitachi Vantara
Increase productivity, efficiency and environmental savings by eliminating silos, preventing sprawl and reducing complexity by 50%. Powerful consolidation systems such as Hitachi Unified Storage or Hitachi NAS Platform let you consolidate existing file servers and NAS devices onto fewer nodes. You can perform the same or even more work with fewer devices and lower overhead, while reducing floor space and associated power and cooling costs. View this webcast to learn how to: Shrink your primary file data without disrupting performance. Increase productivity and utilization of available capacity. Defer additional storage purchases. Save on power, cooling and space costs. For more information please visit: http://www.hds.com/products/file-and-content/network-attached-storage/?WT.ac=us_inside_rm_htchunfds
Why Hitachi Virtual Storage Platform Does So Well in a Mainframe Environment ...Hitachi Vantara
Hitachi VSP is a new paradigm in enterprise array performance. In this session we will discuss how the architecture of VSP enhances its box-wide performance. The results of performance testing with synthetic host I/O generators and the PAI/O driver will also be presented.
Capacity Efficiency: Identifying the Right Solutions for the Right ChallengeHitachi Vantara
Justin Augat, Hitachi Data Systems Senior Product Marketing Manager shares strategies to identify current storage costs, measure the unit cost of data storage, and set preliminary plans to reduce the total cost of storage.
Advantages of Mainframe Replication With Hitachi VSPHitachi Vantara
Learn how Hitachi Virtual Storage Platform mainframe replication capabilities can address your business continuity and disaster recovery requirements. Also learn how Brocade switches and directors complement HDS mainframe replication capabilities and add value to HDS solutions. By viewing this webcast, you’ll learn: Trends driving changes to business continuity requirements, and how HDS replication products such as Hitachi Universal Replicator and hyperswap integration capabilities with Hitachi Business Continuity Manager are best positioned to address them. The key features and functions of Brocade FCIP switches and Fibre Channel/FICON director inter-data center connectivity that provide additional value to HDS replication solutions. Examples of how companies have implemented complete HDS solutions to solve their mainframe BC and DR needs. For more information on our mainframe solutions please read: http://www.hds.com/solutions/infrastructure/mainframe/?WT.ac=us_mg_sol_mnfr
Infosys Deploys Private Cloud Solution Featuring Combined Hitachi and Microsoft® Technologies. For more information on Hitachi Unified Compute Platform Solutions please visit: http://www.hds.com/products/hitachi-unified-compute-platform/?WT.ac=us_mg_pro_ucp
Hitachi Virtual Infrastructure Integrator (Virtual V2I) is a VMware vCenter plugin plus associated software. It provides data management efficiency for large VM environments. Specifically, the latest release addresses virtual machine backup and recovery and cloning services. Customers want to leverage storage-based snapshots because they are scalable and allow more granular backups, shrinking the interval between backups from hours to minutes and improving RPO. VMworld 2015.
Presentation of Hitachi Accelerated Flash Storage. This presentation describes target applications, positioning and unique differentiators relative to MLC SSD.
Simplify Data Center Monitoring With a Single-Pane ViewHitachi Vantara
Keeping IT systems up and well tuned requires constant attention, but the task is too often complicated by separate monitoring tools required to watch applications, servers, networks and storage. This white paper discusses how system administrators can consolidate oversight of these components, particularly where DataCore SANsymphony V storage hypervisor virtualizes the storage resources. Such visibility is made possible through the integration of SANsymphony-V with Hitachi IT Operations Analyzer.
Learn the facts about replication in mainframe storage webinarHitachi Vantara
Business continuity is essential for today's enterprise computing environments, and protecting your data and information is key. However, the many myths associated with data replication can be confusing. How do you sort truth from fiction? Join Hitachi solution architect Joe Amato to learn about in-system replication as well as replication to remote locations, both synchronously and asynchronously. You'll come away equipped with valuable insight into the business continuity solutions available for mainframe storage.
In this video from the DDN User Group at SC14, Robert Triendl presents: Optimizing Lustre and GPFS Solutions with DDN.
Learn more: http://www.ddn.com/hpc-matters/
Beginner's Guide to High Availability for PostgresEDB
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. PostgreSQL is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
This webinar will explore:
- High availability concepts and workings
- RPO, RTO, and uptime in high availability
- Postgres high availability using:
  - Streaming replication
  - Logical replication
- Important high availability parameters in Postgres and options to monitor high availability.
- EDB tools (EDB Postgres Failover Manager, BART, etc.) to create a highly available Postgres architecture
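The RPO, RTO and uptime concepts in the list above come down to simple arithmetic. A minimal sketch in Python (the function names and figures here are illustrative, not part of EDB's or Postgres's tooling):

```python
# Illustrative availability math for the HA concepts above.
# These helpers are hypothetical examples, not EDB or PostgreSQL APIs.

def uptime_percent(downtime_minutes: float, period_days: float = 365.0) -> float:
    """Availability as a percentage over the given period."""
    total_minutes = period_days * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def worst_case_data_loss(backup_interval_minutes: float) -> float:
    """RPO upper bound: with snapshots or WAL archiving every N minutes,
    the most data you can lose in a failure is one full interval."""
    return backup_interval_minutes

# "Four nines" (99.99%) allows roughly 52.6 minutes of downtime per year:
print(round(uptime_percent(52.6), 2))
# Moving from hourly backups to 5-minute intervals tightens RPO 12-fold:
print(worst_case_data_loss(60) / worst_case_data_loss(5))
```

Streaming replication pushes the same trade-off further: with a synchronous standby the theoretical RPO approaches zero, at the cost of commit latency.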
Global Financial Leader Consolidates Mainframe Storage and Reduces Costs with...Hitachi Vantara
Companies with mainframes and mainframe storage face the same complex issues and desires as other businesses. They need to lower costs, reduce their storage footprint, boost performance and increase scalability, all with flat or declining budgets. And even as they make these improvements, companies also want to reduce operations costs and be freed from the overhead of continually tuning their environments for peak performance. They want and expect
data to be moved to the appropriate tier and both capacity and performance to be optimized automatically.
DataCore Software introduction from my "Meet DataCore" webinar. DataCore products include software-defined storage and hyperconverged infrastructure solutions. DataCore has more than 10,000 customers and 30,000+ implementations worldwide.
Flash for the Real World – Separate Hype from RealityHitachi Vantara
Join us for a live webcast and hear Hu Yoshida, Chief Technology Officer of Hitachi Data Systems, discuss the real world criteria for making an effective decision when evaluating flash storage. With all the noise in the market it can be difficult to separate fact from fiction in order to evaluate the performance, efficiency and economic trade-offs for flash storage.
Specifically, you’ll learn how to determine if flash storage will help you:
Actually achieve the performance you need as you compare technology options.
Realize efficiency gains that extend beyond the promise of flash performance.
Make the economic case for real-world business decisions before taking the leap.
Hitachi Dynamic Tiering: An In-Depth Look at Managing HDT and Best Practices,...Hitachi Vantara
Hitachi Dynamic Tiering (HDT) simplifies storage administration by automatically optimizing data placement on multiple tiers of storage. Optimizing this environment ensures that applications get the performance they expect from the underlying storage.
DDN GS7K - Easy-to-deploy, High Performance Scale-Out Parallel File System Ap...inside-BigData.com
In this deck, Uday Mohan from DataDirect Networks presents: DDN GS7K - Easy-to-deploy, High Performance Scale-Out Parallel File System Appliance.
High performance computing is critical in commercial markets, spanning a wide range of applications across multiple industries, and this trend is only growing. The GS7K from DDN will help bring the latest high-performance storage technologies to more of these markets, connecting companies to their next innovations faster while satisfying their enterprise standards.
Watch the video presentation: http://wp.me/p3RLHQ-d99
Overview of Hitachi Dynamic Tiering, Part 1 of 2Hitachi Vantara
Hitachi Dynamic Tiering (HDT) simplifies storage administration by automatically optimizing data placement in 1, 2 or 3 tiers of storage that can be defined and used within a single virtual volume. Tiers of storage can be made up of internal or external (virtualized) storage, and use of HDT can lower capital costs. Simplified and unified management of HDT allows for lower operational costs and reduces the challenges of ensuring applications are placed on the appropriate classes of storage.
Jean Thomas Acquaviva from DDN presents this deck at the 2016 HPC Advisory Council Switzerland Conference.
"Thanks to the arrival of SSDs, the performance of storage systems can be boosted by orders of magnitude. While a considerable amount of software engineering has been invested in the past to circumvent the limitations of rotating media, there is a misbelief that a lightweight software approach may be sufficient for taking advantage of solid state media. Taking data protection as an example, this talk will present some of the limitations of current storage software stacks. We will then discuss how this unfolds into a more radical re-design of the software architecture and ultimately makes a case for an I/O interception layer."
Learn more: http://ddn.com
Watch the video presentation: http://wp.me/p3RLHQ-f7J
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
During this webinar, we will review best practices and lessons learned from working with large and mid-size companies on their deployment of PostgreSQL. We will explore the practices that helped industry leaders move through these stages quickly, and get as much value out of PostgreSQL as possible without incurring undue risk.
We have identified a set of levers that companies can use to accelerate their success with PostgreSQL:
- Application Tiering
- Collaboration between DBAs and Development Teams
- Evangelizing
- Standardization and Automation
- Balance of Migration and New Development
Big Data – Shining the Light on Enterprise Dark DataHitachi Vantara
Content stored for a business purpose often lacks the structure or metadata required to determine its original purpose. With Hitachi Data Discovery Suite and Hitachi Content Platform, businesses can uncover dark data that could be leveraged for better business insight, and surface compliance issues before they become business risks. View this session and learn: What is enterprise dark data? How can enterprise dark data impact business decisions? How can you augment your underutilized data and deliver more value? How can you decrease the headache and challenges created by dark data? For more information please visit: http://www.hds.com/products/file-and-content/
Hitachi Unified Storage 100 family systems consolidate and manage block, file and object data on a central platform. For more information on our unified storage please visit: http://www.hds.com/products/storage-systems/hitachi-unified-storage-100-family.html?WT.ac=us_mg_pro_hus100
How and Why to Upgrade to Hitachi Device Manager v7 WebinarHitachi Vantara
Hitachi Device Manager v7 lets you simplify and control all your storage assets from a centralized console with improved usability, workflow, speed, scalability and task management. Whether you have already upgraded or are considering an upgrade to v7, please join us for this informative webtech session to learn the best practices for upgrading.
Explains how backup-free storage reduces cost and complexity; provides benefits of Hitachi Content Platform; includes brief HDS backup use cases.
For more information on our Unstructured Data Management Solutions please check: http://www.hds.com/go/hitachi-abc-ebook-managing-data/
Comprehensive and Simplified Management for VMware vSphere environmentsHitachi Vantara
Learn how to gain velocity and agility within your VMware vSphere environments while reducing costs and simplifying the management of your server, network and storage infrastructure. You will also learn how to leverage a unified, converged infrastructure to more quickly deploy business-critical workloads within a private cloud environment. View this webcast and learn how to: Increase IT efficiency and gain business velocity by leveraging a unified and converged infrastructure solution from Hitachi. Enable both physical and virtual infrastructure consolidation while supporting thousands of VMs across the data center. Achieve cost reductions through automation and orchestration of your VMware vSphere environment across server, network and storage tiers. For more information on Hitachi Solutions for VMware visit: http://www.hds.com/solutions/applications/vmware/?WT.ac=us_mg_sol_vmw
Hortonworks Technical Workshop: What's New in HDP 2.3Hortonworks
The recently launched HDP 2.3 is a major advancement of Open Enterprise Hadoop. It represents the best of community-led development, with innovations spanning Apache Hadoop, Apache Ambari, Ranger, HBase, Spark and Storm. In this session we will provide an in-depth overview of new functionality and discuss its impact on new and ongoing big data initiatives.
Promise - Rich media storage total solution.
Open storage platform for Rich Media Solutions
Thunderbolt3 storage solution - Pegasus3 family
-----------
Contact: http://promise.com.vn
Best Practices for Migrating your Data Warehouse to Amazon RedshiftAmazon Web Services
You can gain substantially more business insights and save costs by migrating your existing data warehouse to Amazon Redshift. This session will cover the key benefits of migrating to Amazon Redshift, migration strategies, and tools and resources that can help you in the process.
Avoiding Chaos: Methodology for Managing Performance in a Shared Storage A...brettallison
Scope - The primary focus of this presentation is the methodology we use for managing performance in a very large shared Storage Area Network environment, with an emphasis on distributed systems and IBM Enterprise Storage Server. The focus of this presentation is methodology, NOT measurement; there are numerous excellent presentations already out there on measurement. However, there are several references to measurement tools at the back of the presentation.
Need For Speed- Using Flash Storage to optimise performance and reduce costs-...NetAppUK
Flash Storage technologies are opening up a wealth of new opportunities for improving the optimisation of applications, data and storage, as well as reducing costs. In this session, Peter Mason, NetApp Consulting Systems Engineer, shares his experiences and discusses the use and impact of different Flash technologies.
Pilot Hadoop Towards 2500 Nodes and Cluster RedundancyStuart Pook
Hadoop has become a critical part of Criteo's operations. What started out as a proof of concept has turned into two in-house bare-metal clusters of over 2200 nodes. Hadoop contains the data required for billing and, perhaps even more importantly, the data used to create the machine learning models, computed every 6 hours by Hadoop, that participate in real time bidding for online advertising.
Two clusters do not necessarily mean a redundant system, so Criteo must plan for any of the disasters that can destroy a cluster.
This talk describes how Criteo built its second cluster in a new datacenter and how to do it better next time. How a small team is able to run and expand these clusters is explained. More importantly the talk describes how a redundant data and compute solution at this scale must function, what Criteo has already done to create this solution and what remains undone.
Learn about the features that can help you modernize your mission critical applications, where security and performance can go hand in hand. From the wide range of SQL Server features available, we will take a closer look at In-Memory performance, Automatic Tuning, Advanced Security Features like Always Encrypted, Polybase and integration with Machine Learning through R and Python.
Hitachi Vantara and our special guest, Dr. Alison Brooks, Research Director at IDC, discuss:
• How video and other IoT data can help your business become smarter, safer and more efficient.
• How to harness IoT data to gain operational intelligence and achieve better business outcomes.
• How Hitachi’s customers are innovating with IoT to excel.
• Which practical applications and best practices will get you started on your own IoT journey to reach your goals and tackle your challenges.
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bring...Hitachi Vantara
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bringing Flexibility, Agility and Readiness to the Real-Time Enterprise. VMworld 2015
Economist Intelligence Unit: Preparing for Next-Generation CloudHitachi Vantara
Preparing for next-generation cloud: Lessons learned and insights shared is an Economist Intelligence Unit (EIU) research programme, sponsored by Hitachi Data Systems. In this report, the EIU looks at companies’ experiences with cloud adoption and assesses whether the technology has lived up to expectations. Where the cloud has fallen short of expectations, we set out to understand why. In cases of seamless implementation, we gather best practices from firms using the cloud successfully.
HDS Influencer Summit 2014: Innovating with Information to Address Business N...Hitachi Vantara
Top executives at HDS share how the company is innovating with information to address business needs. Learn how the company is transforming now and into the future. #HDSday
Information Innovation Index 2014 UK Research ResultsHitachi Vantara
Hitachi Data Systems releases insights from its inaugural ‘Information Innovation Index’, a UK research report, conducted by independent UK technology market research agency, Vanson Bourne, in which 200 IT decision-makers were surveyed during April 2014 to provide insights into how current approaches to IT are thwarting companies’ ambitions to leverage data to drive innovation and business growth.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies must adapt and embrace new ideas to keep up with the competition. However, fostering a culture of innovation takes real work: it takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
1. VSP MAINFRAME DYNAMIC TIERING PERFORMANCE CONSIDERATIONS
STEVE RICE, MASTER PERFORMANCE CONSULTANT, MAINFRAME, HITACHI DATA SYSTEMS
SEPTEMBER 12, 2012
2. WEBTECH EDUCATIONAL SERIES
VSP Mainframe Dynamic Tiering Performance Considerations
A key reason for using dynamic tiering for mainframe storage is performance. This session will focus on dynamic tiering in mainframe environments and how to configure and control tiering. The session ends with a detailed discussion of performance considerations when using Hitachi Dynamic Tiering.
By attending this webcast, you will:
• Understand Hitachi Dynamic Tiering and the options for configuring and controlling tiering
• Understand the performance considerations and the type of performance improvements you might experience when you implement Hitachi Dynamic Tiering
3. UPCOMING WEBTECHS
Mainframe Series
‒ Mainframe Replication, Sept 19, 9 a.m. PT, 12 p.m. ET
‒ Why Networked FICON Storage Is Better than Direct-attached Storage, Oct 3, 9 a.m. PT, 12 p.m. ET
Other
‒ Storage Analytics, Sept 20, 9 a.m. PT, 12 p.m. ET
‒ Maximize Availability and Uptime by Clustering your Physical Datacenters within Metro Distances, Oct 24, 9 a.m. PT, 12 p.m. ET
Check www.hds.com/webtech for
‒ Links to the recording, the presentation and Q&A (available next week)
‒ Schedule and registration for upcoming WebTech sessions
4. AGENDA
Brief description of tiering
‒ MK-90RD7021 Hitachi Virtual Storage Platform Provisioning Guide for Mainframe
Review RAID terminology
Hitachi Dynamic Provisioning (HDP) terminology and methodology
Discuss Hitachi Dynamic Tiering (HDT) dynamic parameters
Experiments 1 and 2
First steps with HDT and hierarchical storage management
LET TIERING LEARN YOUR WORKLOAD
5. RAID TERMINOLOGY: RAID 5 – 3D + 1P LDEV
[Diagram: an LDEV striped across four drives (HDD 0 through HDD 3) in 8-track RAID chunks. RAID Stripe 0 holds RAID Chunks 0-3 (Tracks 0-7, Tracks 8-15, Tracks 16-23, plus a parity chunk); the parity chunk rotates to a different drive on each successive stripe (RAID Stripe 1, 2, 3, …), and the stripes continue down the drives.]
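The rotating-parity layout in the diagram can be sketched in code. This is an illustrative sketch only: the drive numbering and the direction of parity rotation are assumptions, and `locate_track` is a hypothetical helper; the point is that parity moves to a different drive on each stripe while the 8-track data chunks fill the remaining drives.

```python
# Sketch: locate a track in a RAID-5 (3D+1P) LDEV with 8-track chunks
# and parity rotating one drive per stripe. Drive numbering and rotation
# direction are illustrative assumptions.

TRACKS_PER_CHUNK = 8
DATA_DRIVES = 3          # the "3D" in 3D+1P
TOTAL_DRIVES = 4         # 3 data + 1 parity

def locate_track(track):
    """Return (stripe, data_drive, parity_drive) for a logical track."""
    chunk = track // TRACKS_PER_CHUNK        # logical 8-track data chunk
    stripe = chunk // DATA_DRIVES            # RAID stripe number
    parity_drive = stripe % TOTAL_DRIVES     # parity rotates each stripe
    data_slot = chunk % DATA_DRIVES          # position among data chunks
    # skip over the parity drive when placing the data chunk
    drive = data_slot if data_slot < parity_drive else data_slot + 1
    return stripe, drive, parity_drive
```

For example, tracks 0-23 all land in stripe 0 (one chunk per data drive), and the parity chunk for stripe 1 sits on a different drive than the parity chunk for stripe 0.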
6. PHYSICAL TO LOGICAL: HDD TO 3390-X HOST-ADDRESSABLE DEVICE
LET HDP SHARE THE LOAD
• Physical Layer: Hard Disk Drive (HDD)
• HDS Storage Subsystem Abstraction Layer: Parity Group (PG) → 3390-3/9/27/54/A(EAV) Logical Device (LDEV) → 3390-V MF-HDP Pool Volume (Pool-VOL)
• Host Abstraction Layer (HCD/IOCDS addressable): 3390-A(EAV) HDP Volume (DP-VOL) and 3390-A(EAV) Track Space-Efficient Volume (TSE-VOL)
7. HDP PAGE ALLOCATION: 38MB (672 TRACKS)
[Diagram: the host reads and writes to a 3390-A EAV DP-VOL; its 38MB pages (1, 4, 5, 32, …) are spread across Pool-VOL1 through Pool-VOL4 (3390-V volumes on Parity Groups 1 through 4) within HDP Pool 7.]
• HDP wide stripes across parity groups
• Current implementation of 3390-A(EAV) can have from 1 to 262,668 cylinders
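The page arithmetic on this slide can be checked directly. A sketch follows; the 56,664-byte track capacity and 15 tracks per cylinder are standard 3390 geometry rather than figures from the slide, and `pages_for_volume` is a hypothetical helper.

```python
# Back-of-envelope MF-HDP page arithmetic (standard 3390 geometry assumed).
import math

TRACKS_PER_PAGE = 672     # one HDP page, per the slide
TRACK_BYTES = 56_664      # 3390 track capacity
TRACKS_PER_CYL = 15       # 3390 tracks per cylinder

# 672 tracks x 56,664 bytes comes out to roughly 38 MB per page
page_mb = TRACKS_PER_PAGE * TRACK_BYTES / 1_000_000

def pages_for_volume(cylinders):
    """Pages needed to fully back a 3390-A DP-VOL of the given size."""
    tracks = cylinders * TRACKS_PER_CYL
    return math.ceil(tracks / TRACKS_PER_PAGE)

# Largest current 3390-A (EAV): 262,668 cylinders
max_pages = pages_for_volume(262_668)
```

So a fully allocated maximum-size EAV DP-VOL maps to a few thousand 38MB pages, each of which HDP can place on a different pool volume.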
8. HDT PYRAMID: ENGINEERING RECOMMENDATIONS
HDT pyramid within an HDP pool:
• SSD – 2%
• 10K RPM SAS – 36%
• 7.5K RPM SAS or SATA – 62%
LET TIERING LEARN YOUR WORKLOAD
9. MULTIPLE TIERS WITHIN A SINGLE HDP POOL
• Tier 1 – SSD
• Tier 2 – SAS 10K or SAS 15K
• Tier 3 – SAS 7.5K or External
Every HDP pool has at least 1 tier. DP volumes live within the HDP pool; DP volume pages live within 1 or more tiers.
10. DEFAULT TIER BUFFER SPACE
SAVE A LITTLE SPACE FOR PAGE RELOCATION

Hard Disk Type | Buffer Area for Tier Relocation | Buffer Area for New Page Assignment | Total
SSD | 2% | 0% | 2%
Non-SSD (SAS 15K, SAS 10K, SAS 7.5K HDD, External) | 2% | 8% | 10%

Tier buffer space is set at the HDP pool level
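The buffer percentages above translate directly into capacity left for resident pages. A minimal sketch, assuming the default percentages from the table; the function name is hypothetical.

```python
# Default tier buffer percentages from the table above:
# every tier reserves 2% for relocation; non-SSD tiers add 8% for
# new-page assignment.
BUFFERS = {
    "SSD":     0.02,   # 2% relocation + 0% new-page
    "non-SSD": 0.10,   # 2% relocation + 8% new-page
}

def usable_gb(raw_gb, tier_type):
    """Capacity left for steady-state page residency after buffers."""
    return raw_gb * (1 - BUFFERS[tier_type])
```

For example, a 1,000 GB SAS tier keeps 900 GB for resident pages, while a 1,000 GB SSD tier keeps 980 GB.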
11. HDT CYCLE
Cycle time is set at the HDP pool level
• Manual mode
‒ User can start and stop performance monitoring using any interval up to 7 days
• Automatic mode
‒ Continuous monitoring followed by relocation cycles
‒ Monitor interval of 30 minutes or 1, 2, 4, 8 or 24 hours (default)
LET TIERING LEARN YOUR WORKLOAD
[Timeline: Monitor 1 runs from T0 to T1; its Calc 1 and Relocate 1 run while Monitor 2 proceeds from T1 to T2, and so on through Monitor 4 (T3 to T4).]
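The automatic-mode pipeline in the timeline can be sketched as overlapping windows: while cycle N's data is being calculated and relocated, cycle N+1 is already monitoring. The interval values are the slide's; the window model and function name are illustrative assumptions.

```python
# Sketch of automatic-mode monitoring windows. Relocation for cycle N
# runs during window N+1, so monitoring never pauses.

VALID_INTERVALS_H = [0.5, 1, 2, 4, 8, 24]   # 24 hours is the default

def cycle_events(num_cycles, interval_h=24):
    """(start_hour, end_hour) monitoring windows for each cycle."""
    assert interval_h in VALID_INTERVALS_H
    return [(i * interval_h, (i + 1) * interval_h) for i in range(num_cycles)]
```

With the 24-hour default, Monitor 1 spans hours 0-24 and Monitor 2 spans hours 24-48, with Calc 1 and Relocate 1 happening inside the second window.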
12. HDT MONITORING MODES
LET TIERING LEARN YOUR WORKLOAD
Monitoring modes are set at the HDP pool level
• Period
‒ The value used in the calculation cycle is the actual I/O load on the DP volume page from the previous monitoring cycle
• Continuous
‒ The value used in the calculation cycle is a weighted average of multiple previous monitoring cycles for the DP volume page
‒ Reduces page thrashing
‒ May slow migration to upper tiers
[Chart: DP volume page IOPH vs. tiering cycle number, comparing the spiky Period mode curve against the smoother Continuous mode curve.]
13. RELATIONSHIP BETWEEN NUMBER OF TIERS AND TIERING POLICY
DP VOLUME TIERING POLICIES

Tiering Policy | 1 Tier | 2 Tiers | 3 Tiers | Note
All | All tiers | All tiers | All tiers | Default value; data is stored in all tiers
Level 1 | All tiers* | Tier 1 | Tier 1 | Data is always stored in the highest-speed tier
Level 2 | All tiers* | All tiers* | Tier 1 and Tier 2 |
Level 3 | All tiers* | All tiers* | Tier 2 |
Level 4 | All tiers* | All tiers* | Tier 2 and Tier 3 |
Level 5 | All tiers* | Tier 2 | Tier 3 | Data is always stored in the lowest-speed tier

* Data is stored in all tiers, as in the case of All specified in the tiering policy
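The table above can be expressed as a lookup, which makes the policy-vs-tier-count interaction easier to check. This is a direct transcription of the table into a hypothetical helper; tiers are numbered 1 (fastest) to 3 (slowest), and the starred "All tiers*" entries collapse to all tiers.

```python
# DP volume tiering policies as a lookup (transcribed from the table).

def allowed_tiers(policy, num_tiers):
    """Set of tier numbers a DP volume's pages may occupy."""
    all_tiers = set(range(1, num_tiers + 1))
    if policy == "All" or num_tiers == 1:
        return all_tiers          # single-tier pools always use all tiers
    three_tier = {
        "Level 1": {1}, "Level 2": {1, 2}, "Level 3": {2},
        "Level 4": {2, 3}, "Level 5": {3},
    }
    two_tier = {
        "Level 1": {1}, "Level 2": all_tiers, "Level 3": all_tiers,
        "Level 4": all_tiers, "Level 5": {2},
    }
    return three_tier[policy] if num_tiers == 3 else two_tier[policy]
```

For example, in a 3-tier pool, Level 4 pins a volume's pages to tiers 2 and 3, while the same policy in a 2-tier pool behaves like All.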
14. NEW PAGE ASSIGNMENT POLICY
TWO TIER TABLE FROM MANUAL

Tiering Level Policy | Description
High | The new page is assigned from the higher tier of tiers set in the tiering policy
Middle | The new page is assigned from the middle tier of tiers set in the tiering policy
Low | The new page is assigned from the lower tier of tiers set in the tiering policy

Tiering Level Policy | When Specifying High | When Specifying Middle | When Specifying Low | Notes
All | From Tier 1 to 2 | From Tier 1 to 2 | From Tier 2 to 1 | In the Low setting, tier 2 is given priority over tier 1
Level 1 | From Tier 1 to 2 | From Tier 1 to 2 | From Tier 1 to 2 | Every assignment sequence is the same as when All is specified as the tiering level
Level 2 / Level 3 / Level 4 | From Tier 1 to 2 | From Tier 1 to 2 | From Tier 2 to 1 | Every assignment sequence is the same as when All is specified as the tiering level
Level 5 | From Tier 2 to 1 | From Tier 2 to 1 | From Tier 2 to 1 | Assignment sequences for High, Middle, and Low are the same
15. RELOCATION PRIORITY
DP VOLUME RELOCATION PRIORITY
Use the relocation priority function to set the selection priority of a DP-VOL when performing relocation – a prioritized DP-VOL can be relocated earlier during a relocation cycle
For maximum effectiveness, use sparingly:
• “Level 1”? ‒ No
• “Level 5”? ‒ No
• “All”? ‒ Yes, sparingly
16. HDT EXAMPLE 1
This quick storyboard shows preliminary results from HDT testing
‒ It attempts to show how HDT learns your workload
Scenario: Customer reluctant to upgrade from 300GB to 600GB HDD
Same capacity of HDD (not including SSD)
‒ (128) 300GB SAS
‒ (64) 600GB SAS + (8) 400GB SSD
IMPORTANT NOTE: SSD drives are added to the pool after all data sets are created
17. BASIC CONFIGURATION

Config. Name | RAID Type | LCU | DP-VOL per Pool | PAIO Data Sets | Base/Alias | Dev. Num. | Description
HDT3HF | RAID-6 (6D+2P) | 00-03 | 256 | 1024 | 64/192 | 70xx | (128) 300GB SAS HDP pool
HDT6HF | RAID-6 (6D+2P) | 08-0B | 256 | 1024 | 64/192 | 72xx | (64) 600GB SAS HDP pool
HDT6HF Run 1 through Run 4 | RAID-6 (6D+2P) | 08-0B | 256 | 1024 | 64/192 | 72xx | HDT pool: (8) 400GB SSD + (64) 600GB SAS
18. 300GB AND 600GB HDP BASELINES HAVE BEEN RUN (NO SSD DRIVES)
[Chart: baseline results for the HDP (128) 300GB HDD and HDP (64) 600GB HDD configurations.]
19. FIRST RUN: 600GB TIER 2 + SSD TIER 1 – 0 MINUTES – NO LEARNING
LET TIERING LEARN YOUR WORKLOAD
[Chart: the HDP (128) 300GB HDD and HDP (64) 600GB HDD baselines plus Tiering (64) 600GB – with no learning, the tiering result matches the HDP baseline.]
20. SECOND RUN: 600GB TIER 2 + SSD TIER 1 – 30 MINUTES OF REST AFTER RUN 1
LET TIERING LEARN YOUR WORKLOAD
[Chart: both HDP baselines plus Tiering Run 1 (no learning, same as HDP) and Tiering Run 2 (after 30 minutes of migration).]
21. FOURTH RUN: 600GB TIER 2 + SSD TIER 1 – 30 MINUTES OF REST AFTER RUN 3
LET TIERING LEARN YOUR WORKLOAD
[Chart: both HDP baselines plus Tiering Runs 1 through 4, each successive run after another 30 minutes of migration.]
22. SUMMARY EXPERIMENT 1
After HDT had a chance to “learn” the workload, it achieved a better response time and more throughput
Other interesting results:
‒ After several cycles, HDT migrated 90% of the active PAIO data sets to tier 1, utilizing ONLY 10% of tier 1 – HDT did NOT migrate 100% of the active data sets. In other words, not all of the PAIO data sets deserved to be in the SSD tier, even though PAIO was the only thing active on the system for several hours
‒ If a customer has a performance issue in an HDT environment, more SSD capacity could be installed, increasing residency of active pages in tier 1
23. HDT EXAMPLE 2
This quick storyboard is another attempt to show how HDT learns your workload
Same 600GB tier as the previous experiment, except at a steady state of 24K IOPS
‒ (64) 600GB SAS drives + (8) 400GB SSD
IMPORTANT NOTE: SSDs are added to the pool after all data sets are created.
25. FIRST STEPS FOR HDT AND HIERARCHICAL STORAGE MANAGEMENT (HSM)
“Hierarchical storage management (HSM) is a data storage technique which automatically moves data between high-cost and low-cost storage media” –Wikipedia
The following slide is a simple example of introducing HDT in a mainframe environment that utilizes IBM DFSMShsm
LET TIERING LEARN YOUR WORKLOAD
26. FIRST STEPS FOR HDT AND HSM
[Diagram: three HDP pools mapped against HSM levels and tiering policies. HDP Pool 1 holds primary space (Level 0) across Tier 1 (SSD), Tier 2 (SAS 10K or 15K) and Near-Line SAS 7.5K, with policies ranked from “Level 1” (adored), “All” with “Relocation Priority” (most loved) and “All” (loved) down to “Level 5” (liked). HDP Pool 2 holds HSM ML1 and low-I/O-density DP volumes (unliked); HDP Pool 3 holds HSM ML2 and very-low-I/O-density DP volumes on external volumes (unloved).]
27. HDT FINAL EXAM
Question
‒ If you utilize all HDT dynamic tiering policies, how many levels of service can be defined in a 3-level tier?
‒ Hint: I did not talk about all of the possible levels, but you can figure it out
28. WITH A 3-LEVEL TIER, 7 LEVELS OF SERVICE
AN EXAMPLE OF UTILIZING DYNAMIC TIERING POLICIES
A single HDP pool with:
• Tier 1 – SSD
• Tier 2 – SAS 10K or SAS 15K
• Tier 3 – SAS 7.5K or External
The seven levels of service: “Level 1”, “Level 2”, “Level 3”, “Level 4”, “Level 5”, “ALL”, and “All” with “Relocation Priority”
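The answer can be enumerated as data, combining the five levels and All from the tiering-policy table with the relocation-priority variant of All. The tier placements are those of a 3-tier pool; listing them as a dict is an illustrative choice, not the product's representation.

```python
# The seven service levels for a 3-tier pool, with the tier placement
# each allows. Combining "All" with Relocation Priority is what brings
# the count from six policies to seven levels of service.

SERVICE_LEVELS = {
    "Level 1": {1},
    "Level 2": {1, 2},
    '"All" with "Relocation Priority"': {1, 2, 3},  # relocated earliest
    "All": {1, 2, 3},
    "Level 3": {2},
    "Level 4": {2, 3},
    "Level 5": {3},
}
```

Seven distinct entries, even though two of them span all three tiers: relocation priority differentiates the two All variants by when, not where, their pages move.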
29. SUMMARY
LET TIERING LEARN YOUR WORKLOAD
Brief description of tiering
Discuss HDT dynamic parameters
‒ HDT cycle
‒ HDT tiering policy
‒ All
‒ Level 1
‒ Level 5
‒ HDT default tier buffer space
‒ HDT new page assignment policy
‒ ALL (Low) for 2 tier, ALL (Middle) for 3 tier
Experiments 1 and 2
First steps with HDT
31. UPCOMING WEBTECHS
Mainframe Series
‒ Mainframe Replication, Sept 19, 9 a.m. PT, 12 p.m. ET
‒ Why Networked FICON Storage Is Better than Direct-attached Storage, Oct 3, 9 a.m. PT, 12 p.m. ET
Other
‒ Storage Analytics, Sept 20, 9 a.m. PT, 12 p.m. ET
‒ Maximize Availability and Uptime by Clustering your Physical Datacenters within Metro Distances, Oct 24, 9 a.m. PT, 12 p.m. ET
Check www.hds.com/webtech for
‒ Links to the recording, the presentation and Q&A (available next week)
‒ Schedule and registration for upcoming WebTech sessions