This document provides an overview of a new CPU capability called Intel® Speed Select
Technology – Base Frequency (Intel® SST-BF), which is available on select SKUs of 2nd
generation Intel® Xeon® Scalable processor (formerly codenamed Cascade Lake). The
document also includes benchmarking data and instructions on how to enable the
capability.
Value propositions of this capability include:
• Select SKUs of 2nd generation Intel® Xeon® Scalable processor (5218N, 6230N, and
6252N) offer a new capability called Intel® SST-BF.
• Intel® SST-BF allows the CPU to be deployed with an asymmetric core frequency
configuration.
• The placement of key workloads on higher-frequency Intel® SST-BF enabled cores can result in an overall system workload increase and potential overall energy savings when compared to deploying the CPU with symmetric core frequencies (a minimal discovery-and-pinning sketch follows this list).
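As a concrete illustration of that placement, here is a minimal Python sketch for Linux. It assumes the intel_pstate driver exposes a per-core base_frequency file in sysfs on SST-BF enabled systems, and that a high-priority core is one whose base frequency exceeds the package minimum; the workload path is a placeholder, and this is a sketch, not an official Intel tool.

    import glob
    import subprocess

    def read_khz(path):
        # sysfs cpufreq values are plain integers in kHz
        with open(path) as f:
            return int(f.read())

    # Collect each core's base frequency (assumption: intel_pstate exposes
    # base_frequency on SST-BF systems; if the files are absent, SST-BF is
    # unavailable or not enabled).
    base = {}
    for p in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/base_frequency"):
        cpu = int(p.split("/")[5][3:])   # ".../cpuN/..." -> N
        base[cpu] = read_khz(p)

    # Treat cores whose base frequency exceeds the package minimum as the
    # SST-BF high-priority set.
    low = min(base.values())
    high_prio = sorted(c for c, f in base.items() if f > low)
    print("High-priority (SST-BF) cores:", high_prio)

    # Pin a key workload to those cores; './key_workload' is a placeholder.
    subprocess.run(["taskset", "-c", ",".join(map(str, high_prio)), "./key_workload"])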
Learning from ZFS to Scale Storage on and under Containers – inside-BigData.com
Evan Powell presented this deck at the MSST 2017 Mass Storage Conference.
"What is so new about the container environment that a new class of storage software is emerging to address these use cases? And can container orchestration systems themselves be part of the solution? As is often the case in storage, metadata matters here. We are implementing in the open source OpenEBS.io some approaches that are in some regards inspired by ZFS to enable much more efficient scale out block storage for containers that itself is containerized. The goal is to enable storage to be treated in many regards as just another application while, of course, also providing storage services to stateful applications in the environment."
Watch the video: http://wp.me/p3RLHQ-gPs
Learn more: blog.openebs.io
and
http://storageconference.us
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Haodong Tang from Intel gave this talk at the 2018 Open Fabrics Workshop.
"Efficient network messenger is critical for today’s scale-out storage systems. Ceph is one of the most popular distributed storage system providing a scalable and reliable object, block and file storage services. As the explosive growth of Big Data continues, there're strong demands leveraging Ceph build high performance & ultra-low latency storage solution in the cloud and bigdata environment. The traditional TCP/IP cannot satisfy this requirement, but Remote Direct Memory Access (RDMA) can.
"In this session, we'll present the challenges in today's distributed storage system posed by network messenger with the profiling results of Ceph All Flash Array system showing the networking already become the bottleneck and introduce how we achieved 8% performance benefit with Ethernet RDMA protocol iWARP. We'll first present the design of integrating iWARP to Ceph networking module together with performance characterization results with iWARP enabled IO intensive workload. The send part, we will explore the proof-of-concept solution of Ceph on NVMe over iWARP to build high-performance and high-density storage solution. Finally, we will showcase how these solutions can improve OSD scalability, and what’s the next optimization opportunities based on current analysis."
Watch the video: https://wp.me/p3RLHQ-ikV
Learn more: http://intel.com
and
https://insidehpc.com/2018/04/amazon-libfabric-case-study-flexible-hpc-infrastructure/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Virtualization with KVM (Kernel-based Virtual Machine) – Novell
As a technical preview, SUSE Linux Enterprise Server 11 contains KVM, which is the next-generation virtualization software delivered with the Linux kernel. In this technical session we will demonstrate how to set up SUSE Linux Enterprise Server 11 for KVM, install some virtual machines and deal with different storage and networking setups.
To demonstrate live migration we will also show a distributed replicated block device (DRBD) setup and a setup based on iSCSI and OCFS2, which are included in SUSE Linux Enterprise Server 11 and SUSE Linux Enterprise 11 High Availability Extension.
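As a sketch of what such a live migration looks like programmatically, here is a minimal example using the libvirt Python bindings. The host names and the VM name are placeholders, and it assumes shared storage (such as the DRBD or iSCSI+OCFS2 setups described above) is already visible to both hosts.

    import libvirt

    # Open connections to the source and destination hypervisors.
    src = libvirt.open("qemu:///system")
    dst = libvirt.open("qemu+ssh://host2/system")

    dom = src.lookupByName("sles11-guest")   # placeholder VM name
    # Live-migrate while the guest keeps running: memory pages are copied
    # in the background and the switchover happens at the end.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    print("guest active on host2:", bool(dst.lookupByName("sles11-guest").isActive()))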
Red Bend Software: Separation Using Type-1 Virtualization in Vehicles and Aut... – Red Bend Software
Satish Varma from Red Bend Software presents at TI Tech Day Detroit 2013 on how to use Type-1 virtualization to consolidate hardware in automotive ECUs. Panelists included QNX Software Systems and Crank Software.
CRUSH is the powerful, highly configurable algorithm Red Hat Ceph Storage uses to determine how data is stored across the many servers in a cluster. A healthy Red Hat Ceph Storage deployment depends on a properly configured CRUSH map. In this session, we will review the Red Hat Ceph Storage architecture and explain the purpose of CRUSH. Using example CRUSH maps, we will show you what works and what does not, and explain why.
Presented at Red Hat Summit 2016-06-29.
Amazon AWS basics needed to run a Cassandra Cluster in AWS – Jean-Paul Azar
There is a lot of advice on how to configure a Cassandra cluster on AWS. Not every configuration meets every use case.
The best way to know how to deploy Cassandra on AWS is to know the basics of AWS. Part 1: we start by covering AWS (as it applies to Cassandra). Later we go into detail on AWS Cassandra specifics.
Network Infrastructure Virtualization Case Study – Cisco Canada
This session focuses on a customer case study in which Network Virtualization has been deployed. The focus of this session is to cover the actual business requirements of the customer involved, how Network Virtualization met those requirements, the network design that was employed, and the benefits that were derived. Introducing the session will be a brief outline of Cisco's approach to Network Virtualization design methodology. The customer case study itself will focus on a Campus Network Virtualization deployment. Presenting this case study will be Dave Zacks, a Technical Solution Architect with Cisco Systems. Attendees at this session will learn about virtualized network deployments, and how these can be used to provide unique and compelling architectural solutions, addressing both business and technical requirements.
Kernel Recipes 2015: Linux Kernel IO subsystem - How it works and how can I s... – Anne Nicolas
Understanding how the Linux kernel IO subsystem works is key to analyzing a wide variety of issues that occur when running a Linux system. This talk is aimed at helping Linux users understand what is going on and how to get more insight into what is happening.
First we present an overview of the Linux kernel block layer, including the different IO schedulers. We also talk about the new block multiqueue implementation that is used for more and more devices.
After surveying the basic architecture, we will be prepared to talk about tools to peek into it. We start with lightweight monitoring like iostat and continue with the heavier blktrace and the variety of tools based on it. We demonstrate the use of these tools on analysis of real-world issues.
Jan Kara, SUSE
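As a taste of the lightweight end of that tool spectrum, here is a minimal iostat-like sampler in Python over /proc/diskstats. The device name is an assumption to adjust for your system; the field layout follows the kernel's iostats documentation.

    import time

    def snapshot(dev):
        # Field layout per Documentation/admin-guide/iostats.rst:
        # [3] reads completed, [5] sectors read,
        # [7] writes completed, [9] sectors written
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    return int(p[3]), int(p[5]), int(p[7]), int(p[9])
        raise ValueError(f"device {dev!r} not found")

    dev = "sda"                      # assumed device name
    r0, rs0, w0, ws0 = snapshot(dev)
    time.sleep(1)
    r1, rs1, w1, ws1 = snapshot(dev)
    # Sectors in /proc/diskstats are always 512 bytes, regardless of the
    # device's actual sector size.
    print(f"{dev}: {r1 - r0} reads/s, {w1 - w0} writes/s, "
          f"{(rs1 - rs0) * 512 // 1024} KiB/s read, "
          f"{(ws1 - ws0) * 512 // 1024} KiB/s written")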
Overview of Hitachi Dynamic Tiering, Part 1 of 2 – Hitachi Vantara
Hitachi Dynamic Tiering (HDT) simplifies storage administration by automatically optimizing data placement in 1, 2 or 3 tiers of storage that can be defined and used within a single virtual volume. Tiers of storage can be made up of internal or external (virtualized) storage, and use of HDT can lower capital costs. Simplified and unified management of HDT allows for lower operational costs and reduces the challenges of ensuring applications are placed on the appropriate classes of storage.
Presentation of Hitachi Accelerated Flash Storage. This presentation describes target applications, positioning and unique differentiators relative to MLC SSD.
A storage analysis based on a VMware P2V project.
This analysis looks at the necessary storage infrastructure required to support a 500 VM environment on EMC LUNs and NetApp NFS volumes.
Data is being generated at rates never before encountered. The explosion of data threatens to consume all of our IT resources: People, budget, power, cooling and data center floor space. Are your systems coping with your data now? Will they continue to deliver as the stress on data centers increases and IT budgets dwindle?
Imagine if you could be ahead of the data explosion by being proactive about your storage instead of reactive. Now you can be, with NetApp's approach to the designs and deployment of storage systems. With it, you can take advantage of NetApp's latest storage enhancements and take control of your storage. This will allow you to focus on gathering more insights from your data and deliver more value to your business.
NetApp's most advanced storage solutions are NetApp virtualization and scale-out. By taking control of your existing storage platform with either solution, you get:
• An immortal storage system
• Infinite scalability
• The best possible ROI from your existing environment
How to shut down and power up a NetApp cluster-mode storage system – Saroj Sahu
This slide deck guides you through shutting down and powering up a NetApp cluster-mode storage system from the command line, depicting an environmental shutdown process for a SAN environment in a data center.
Presentation on 1G/2G/3G/4G/5G/Cellular & Wireless Technologies – Kaushal Kaith
This presentation explains the generations of mobile and cellular technology (1G/2G/2.5G/3G/4G/5G): invention details, features, drawbacks, the look of wireless models, the comparison and evolution of technology from 1G to 5G, and wireless applications and their services.
Build Converged Infrastructures With True Systems Management – Hitachi Vantara
Converged infrastructures, such as the Hitachi Unified Compute Platform, can help drive down operational IT costs if implemented and used properly. In this presentation, we'll explore how converged infrastructures can be deployed flexibly with fast provisioning of IT resources for a wide variety of applications.
Breakout session during Proact's SYNC 2013, 18 September 2013.
Software Defined Storage – Clustered Data ONTAP: The Storage Hypervisor by Wessel Gans, NetApp
Strange but true: most infrastructure architectures are deliberately designed from the outset to need little or no change over their lifetimes. There are two main reasons for this:
1. Change often means outages and customer impact and must be avoided
2. Budgets are set at the beginning of a project and getting more cash later is tough
Typically, then, applications are configured with all of the storage capacity they need to support the wildest dreams of their business sponsors (and then some extra is added for contingency by IT). Equally, storage is always configured with the performance level (storage tier) set to cope with the wildest transactional dreams of the business sponsor (and guess what? IT generally adds a bit more for good measure).
No wonder storage is now one of the largest cost components involved in delivering and running a business application.
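To make the cost concrete, here is a small back-of-the-envelope calculation in Python; all numbers are invented for illustration.

    dreamed_peak_tb = 100      # capacity the business sponsor asked for
    contingency     = 0.25     # extra that IT adds "for good measure"
    actual_used_tb  = 30       # what the application really consumes
    cost_per_tb     = 2000     # assumed fully loaded cost per TB, premium tier

    provisioned_tb = dreamed_peak_tb * (1 + contingency)
    utilization    = actual_used_tb / provisioned_tb
    stranded_cost  = (provisioned_tb - actual_used_tb) * cost_per_tb
    print(f"Provisioned {provisioned_tb:.0f} TB at {utilization:.0%} utilization; "
          f"${stranded_cost:,.0f} tied up in idle capacity")

With these assumed numbers, 125 TB is provisioned, only 24% is used, and $190,000 sits in idle capacity.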
This paper was written by David Reine, an IT analyst for The Clipper Group, and highlights the new features, capabilities, and benefits of IBM's SAN Volume Controller; these new capabilities were announced on October 20, 2009. Virtualization is at the center of all 21st-century IT systems, yet many CIOs fail to fully understand all of the benefits it can deliver to the data center operation. When we think of virtualization, we think compute, network, and storage, and we mostly think about driving up utilization of each. Storage controllers have always offered the ability to carve out pieces of real storage from a large pool and deliver them efficiently to a number of hosts, but it is storage virtualization itself that offers improvements that drive operational efficiency. IBM has been quietly addressing storage virtualization with SAN Volume Controller (SVC) for the last six years, building up a significant technical lead in this space.
Storage virtualization: deliver storage as a utility for the cloud webinar – Hitachi Vantara
What are the requirements for cloud storage? You need agile systems and management solutions to meet changing business requirements over time. You need to segregate or compartmentalize storage for multitenancy. And you need to be able to flexibly deliver specified service levels to individual departments and applications. When you virtualize storage with Hitachi block virtualization, you can use any of your storage for any system or application. Plus you can move data throughout the Hitachi Dynamic Storage infrastructure without disrupting operations. Attend this informative session to learn how Hitachi Command Suite can help you meet the demanding storage requirements of private cloud computing.
MGT220 - Virtualisation 360: Microsoft Virtualisation Strategy, Products, and... – Louis Göhl
Learn about the Microsoft virtualisation strategy from the datacenter, to the desktop, to the cloud--and how it will help you cut costs and build value. In this session we review and demonstrate Microsoft virtualisation products and discuss how you can use them to solve today's IT issues (cost cutting, consolidation, business continuity, green IT), develop new computing solutions (VDI) and build a foundation for a more dynamic IT environment, including cloud computing. The session reviews all of the latest Microsoft virtualisation products, including Application Virtualization (App-V), Microsoft Enterprise Desktop Virtualization (MED-V), Windows Server 2008 with Hyper-V, and Microsoft Hyper-V Server, as well as the System Center management platform (including Virtual Machine Manager 2008). Learn about the innovative pricing and licensing structure that allows further savings to lower both acquisition and ongoing ownership costs. Learn how you can enable IT to become a cost cutting mechanism with Microsoft virtualisation and management technologies.
Unified Compute Platform Pro for VMware vSphere – Hitachi Vantara
Relentless trends of increasing data center complexity and massive data growth have companies seeking new, reliable ways to deliver IT services in an on-demand, rapid, flexible and scalable fashion. Many data centers now face growing demands for faster delivery of business services, serious resource contentions and trade-offs between IT agility and vendor lock-in. They also have mounting complications and rising costs in managing disparate islands of technology resources.
HDS Influencer Summit 2014: Innovating with Information to Address Business N... – Hitachi Vantara
Top executives at HDS share how the company is innovating with information to address business needs. Learn how the company is transforming now and into the future. #HDSday
Storage Analytics: Transform Storage Infrastructure Into a Business Enabler – Hitachi Vantara
View this webinar session to learn how you can transform your storage infrastructure into a business enabler. You will learn: Tips and tricks to streamline storage performance monitoring across your Hitachi environment. How to define and enforce performance and capacity objectives for key business applications by establishing storage service level management. How to create storage service level management reports that satisfy the needs of multiple IT stakeholders (that is, CIO, architect, administrator). For more information on controlling costs of sprawling storage with storage analytics white paper: http://www.hds.com/assets/pdf/hitachi-white-paper-control-costs-and-sprawling-storage-with-storage-analytics.pdf
Hitachi Vantara and our special guest, Dr. Alison Brooks, Research Director at IDC, discuss:
• How video and other IoT data can help your business become smarter, safer and more efficient.
• How to harness IoT data to gain operational intelligence and achieve better business outcomes.
• How Hitachi’s customers are innovating with IoT to excel.
• Which practical applications and best practices will get you started on your own IoT journey to reach your goals and tackle your challenges.
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bring... – Hitachi Vantara
Virtualizing SAP HANA with Hitachi Unified Compute Platform Solutions: Bringing Flexibility, Agility and Readiness to the Real-Time Enterprise. VMworld 2015
Hitachi Virtual Infrastructure Integrator (Virtual V2I) is a VMware vCenter plugin plus associated software. It provides data management efficiency for large VM environments. Specifically, the latest release addresses virtual machine backup and recovery and cloning services. Customers want to leverage storage-based snapshots because they are scalable and enable more granular backups, reducing the time between backups from hours to minutes and improving RPO. VMworld 2015.
Economist Intelligence Unit: Preparing for Next-Generation Cloud – Hitachi Vantara
Preparing for next-generation cloud: Lessons learned and insights shared is an Economist Intelligence Unit (EIU) research programme, sponsored by Hitachi Data Systems. In this report, the EIU looks at companies’ experiences with cloud adoption and assesses whether the technology has lived up to expectations. Where the cloud has fallen short of expectations, we set out to understand why. In cases of seamless implementation, we gather best practices from firms using the cloud successfully.
Information Innovation Index 2014 UK Research Results – Hitachi Vantara
Hitachi Data Systems releases insights from its inaugural ‘Information Innovation Index’, a UK research report, conducted by independent UK technology market research agency, Vanson Bourne, in which 200 IT decision-makers were surveyed during April 2014 to provide insights into how current approaches to IT are thwarting companies’ ambitions to leverage data to drive innovation and business growth.
JMeter webinar - integration with InfluxDB and Grafana – RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
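For readers who want to see the data path behind this integration, here is a hedged Python sketch of what a JMeter backend listener effectively does: push per-sample metrics to InfluxDB's 1.x HTTP write endpoint in line protocol. The database name, measurement, and tag names are illustrative assumptions, not the exact schema the webinar uses.

    import time
    import urllib.request

    # InfluxDB 1.x line-protocol write endpoint; 'jmeter' database assumed.
    INFLUX_WRITE = "http://localhost:8086/write?db=jmeter"

    def write_sample(transaction, elapsed_ms, ok):
        status = "ok" if ok else "ko"
        # One point per sample: tags identify the transaction, the field
        # holds the response time, the trailing value is a ns timestamp.
        line = (f"jmeter,application=demo,transaction={transaction},status={status} "
                f"responseTime={elapsed_ms}i {time.time_ns()}")
        req = urllib.request.Request(INFLUX_WRITE, data=line.encode(), method="POST")
        urllib.request.urlopen(req)

    write_sample("login", 142, True)   # Grafana can then chart the 'jmeter' measurement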
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I wondered, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
Securing your Kubernetes cluster: a step-by-step guide to success! – KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
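As one example of such a step, a common early hardening move is a default-deny NetworkPolicy. Below is a hedged sketch using the official Kubernetes Python client; the namespace name is a placeholder, and this shows one illustrative step, not the talk's full checklist.

    from kubernetes import client, config

    # Load credentials from ~/.kube/config (use load_incluster_config()
    # when running inside a pod).
    config.load_kube_config()

    # Default-deny: the empty pod selector matches every pod in the
    # namespace, and listing both policy types with no allow rules blocks
    # all ingress and egress until explicit policies open specific paths.
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-all"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),
            policy_types=["Ingress", "Egress"],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="production", body=policy)   # namespace is a placeholder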
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure.pdf – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open source: exploring how these areas are likely to mature and develop over the short and long term, and then considering how organisations can position themselves to adapt and thrive.
GraphRAG is All You Need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
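To make the GraphRAG idea concrete, here is a minimal, self-contained Python sketch: retrieve facts from a knowledge graph around entities mentioned in a question, then hand them to a language model as grounded context. The toy graph and the ask_llm stub are assumptions for illustration, not FalkorDB's API.

    # Toy knowledge graph: entity -> list of (relation, object) edges.
    graph = {
        "FalkorDB": [("is_a", "graph database"), ("co_founded_by", "Guy Korland")],
        "GraphRAG": [("combines", "knowledge graphs"), ("combines", "LLMs")],
    }

    def retrieve(question):
        # Naive entity linking: keep facts for entities named in the question.
        facts = []
        for entity, edges in graph.items():
            if entity.lower() in question.lower():
                facts += [f"{entity} {rel} {obj}" for rel, obj in edges]
        return facts

    def ask_llm(question, context):
        # Stub: swap in a real LLM call; here we just show the grounded prompt.
        return f"Context: {'; '.join(context)}\nQ: {question}"

    print(ask_llm("What is FalkorDB?", retrieve("What is FalkorDB?")))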
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
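A hedged sketch of the notification pattern described here: posting a message to Slack through an incoming webhook. The webhook URL and message text are placeholders, and the Bonterra-side data extraction is out of scope.

    import json
    import urllib.request

    WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

    def notify(text):
        # Slack incoming webhooks accept a JSON payload with a 'text' field.
        req = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps({"text": text}).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    notify("New record awaiting approval: case #1234 (placeholder)")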
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... – Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Hitachi Virtual Storage Platform and Storage Virtualization Operating System Slidecast
1. Bob Madaio, Sr. Director, Product Marketing, April 2014. Hitachi Virtual Storage Platform G1000 and Storage Virtualization Operating System.
2. Continuous Cloud Infrastructure: Powered by Hitachi. Redefining mission-critical storage virtualization with the Hitachi Storage Virtualization Operating System (SVOS), the Hitachi Virtual Storage Platform (VSP) G1000, and Hitachi Command Suite (HCS) V8.
3. Storage Virtualization Operating System: a better IT experience through advanced software. (Diagram: data abstraction and system abstraction through the virtual storage machine, serving VM images, Continuous Access, and mobile apps via Hitachi Content Platform Anywhere.)
4. Achieve Continuous Operations. Customer requirements: nonstop data access without interruption; workload mobility across data centers. Hitachi advantages: simplification of high-availability operations; native architecture eliminates the “appliance tax”; no special server software required; simple management through a consolidated view. (Diagram: Site A / Site B.) AVAILABLE: clustered, active-active systems.* (*Separately licensed feature available after initial release. Ask your HDS representative or partner for more information.)
5. Respond to Any Application Requirement. “HDS's HAF approach is unique…That gives it a performance advantage, data management software advantages inherited from the parent array, and, we shall see, a possible total cost of ownership edge.” Supports tiered flash or >600 TB all flash; 3M+ IOPS at 100% random reads; 1.2M+ NFS OPS in expected SPEC SFS results for a single system with 8-node HNAS. AGILE: agile business demands instant performance.
6. Simplified Data Center Planning. From the traditional system layout, to today's VSP G1000 flexible deployment (5 m, 30 m, and 100 m options with separate storage controllers), to the ultimate deployment flexibility coming soon (separate controller racks and disk racks). Increase floor-space efficiency, eliminate data center hotspots, and flexibly scale performance and capacity. AGILE: deploy to fit data center requirements.
7. Streamline Management. One view of all virtualized storage assets; unifies management and provisioning for both structured and unstructured data across virtualized heterogeneous storage capacity. Hitachi advantages: central control for virtual storage machine configuration, management, and monitoring; an integrated management framework enables advanced management functionality, automation, and quicker deployments. AUTOMATED: integrated global storage virtualization management.
8. Ensure Service Level Attainment. Accurately monitor application storage service levels 24/7 and identify applications at risk; service level objective profile recommendations streamline establishing SLAs. Hitachi advantages: the only vendor to offer application service level monitoring for both performance and capacity; customize storage service level objectives by an application’s business criticality. AUTOMATED: improve application storage service levels.
9. Technology Refresh Challenges Persist. “Five- to seven-year [external] disk storage infrastructure refresh cycles will become the new normal.” (Forecast Analysis: External Controller-Based Disk Storage, Worldwide 3Q13 Update, 10/31/13, Cox/Chang.) Why?
10. Redefine Enterprise Tech-Refresh Cycles. (Diagram: on most storage platforms, every X years of normal life ends in Y months of change and effort spent on migration planning, array design and build, host remediation, upgrade and migration, and migration cleanup; with in-place upgrades and nondisruptive mobility the platform reaches 2x normal life.) What if you could double the useful life of your infrastructure, and refreshing the technology was quick and easy? Hitachi Virtual Storage Platform G1000.
11. Migrate Without Disruption Through Global Storage Virtualization. (Diagram: a server with your apps; global storage virtualization always maintains the relationships between your applications and their data; the VSP G1000 migrates virtualized storage capacity and identity, simplifies migration of systems and paired devices (Hitachi source and target), and reduces migration exposure by presenting a virtual storage identity in place of the physical storage identity.) AVAILABLE: move data and refresh systems as needed.
12. Continuous Cloud Infrastructure: Powered by Hitachi. Redefining mission-critical storage virtualization with the Hitachi Storage Virtualization Operating System (SVOS), the Hitachi Virtual Storage Platform (VSP) G1000, and Hitachi Command Suite (HCS) V8.
To help our customers achieve this vision, Hitachi has introduced significant new technology in the areas of storage virtualization software, with our Hitachi Storage Virtualization Operating System; enterprise storage hardware, with the new Virtual Storage Platform G1000; and integrated management, with significant improvements in Hitachi Command Suite v8. We believe that these technologies come together to fundamentally redefine our customers’ ideas of what mission-critical storage virtualization can be.
Presenting this slide… practice the build. Basically, it shows how we build on our history of virtualizing external storage systems and abstracting their data. Those systems and the primary storage systems then get the value of the Storage Virtualization Operating System and all its functionality. SVOS can then create virtual storage machines, where system information is abstracted from the specific hardware yet managed like a hardware device. This means a virtual storage machine can move from system to system and can span physical systems in an active-active manner, offering many new ways to provide continuous access to all the environments that use its capacity. All of this infrastructure capability can underpin any enterprise application and environment, across key business apps, server/virtualization platforms, and mobile deployments.
Presenting this slide… The ultimate availability is ensuring continuous operations for key applications. The idea of an “appliance tax” is introduced. The thought here is that our main storage virtualization and active-active competitors are separate appliance solutions, and every appliance solution imposes a tax on a customer environment. Solutions like VPLEX force significant scaling and complication of SAN connectivity; to get full throughput from a connectivity standpoint, many IBM appliances would be needed; an extra “stop” is introduced in the environment; and so on. All of these are taxes that an external device makes a user pay. A native solution like our global-active device capability removes that tax and drastically simplifies the environment, especially as you scale.
Agile businesses have highly responsive application environments. In some businesses, immediate response is critical to capitalizing on opportunities and making decisions. The VSP G1000 offers remarkable system performance and is able to handle the most strenuous workloads. Whether in block storage environments or high-performance NAS environments, VSP G1000 performance is the envy of the industry. Of course, the system leverages the unique Hitachi Accelerated Flash storage and can support tiered configurations or more than 600 TB of flash in an all-flash configuration.
Sometimes very tactical data center needs, whether an unbalanced environmental issue (such as hotspots) or a simple space or rack concern, can slow our ability to bring on or add to our infrastructure. The new ability for the VSP G1000 to be installed in a physically separated manner is a huge benefit to our customers. Today, controllers can be separated by up to 100 meters, each with its own connected disks. In the future, support will be added for more granular storage system separation.
Hitachi Command Suite provides a common management framework that lets users properly inventory their existing storage resources while simplifying the challenges of managing complex storage environments. By enabling efficient administrative practices, organizations can dynamically align storage resources to changing business needs while maximizing return on storage investments. This lets them control storage costs, lower operational costs, and reduce risks. Logical group constructs are highly configurable, providing the ability to logically and hierarchically group storage resources (that is, logical devices, or LDEVs) with the business applications and units that rely on them. This makes it easier to manage all the logical storage resources associated with a business application for configuration, reporting, replication, or migration purposes. Hitachi Command Suite provides the foundation for the following set of advanced management capabilities:
• Centrally discover and manage all Hitachi storage systems and virtualized storage assets
• Provide essential management of heterogeneous storage resources in a virtualized storage environment
• Accelerate storage provisioning for new applications while reducing overprovisioning
• Facilitate data volume movement to match storage tier attributes (performance, cost) to business application requirements
• Accurately monitor performance and capacity usage for improved service level management
• Centrally administer replication and data protection requirements
• Track and control storage infrastructure costs
A converged data center infrastructure platform: UCP addresses key software-defined data center requirements for mobility and portability. UCP Pro for VMware comes pre-configured and integrated with vSphere, with a RESTful API for cloud deployment. Emphasis: UCP Director for UCP Pro for VMware vSphere is tightly integrated with VMware vCenter and Hitachi Command Suite and provides unified orchestration and monitoring of your VMware environment with UCP Pro. The guiding design principles for UCP Director are simple:
• Give us choice (APIs to implement things the way that is right for us)
• Don't reinvent what's already there (VMware vCenter)
• Provide a seamless experience inside the virtualization management tool of choice
• Bring everything under one view (increase hardware visibility)
UCP Director provides simple monitoring of all elements of the UCP Pro for VMware vSphere solution under a single unified view. It reports overall health status for all solution elements and proactively indicates impending device failures. This helps minimize solution and application downtime by directing the appropriate actions to avoid failures, or to minimize system downtime should a failure occur.
The combined capabilities of the VSP G1000 hardware and the Storage Virtualization Operating System also change the entire technology-refresh paradigm. The process followed to retire or upgrade most storage platforms is well known: first you plan out the migration and the new system design; you work out host requirements, connectivity, and required changes; you then physically copy the data and run through a variety of checks and tests to make sure everything happened to plan. All in all, it is a lengthy process that takes up a significant part of the installed asset's useful life. Of course, once it is done, it feels like it is time to start planning for the next migration all over again, and it seems there is no end in sight. With our new technologies, there is a better way. Two major factors are at play. First, the hardware lasts significantly longer because of its performance, scalability, and support for a variety of platforms and options. Second, the use of virtual storage machines and nondisruptive migration vastly simplifies a storage migration, so far less of the asset's useful life is lost to planning and migration. Analysts already see this change coming: Gartner has shown that customers will start expecting ECB (in their words, "external controller-based") storage system refreshes to be longer than 5 years moving forward. With the advent of virtual storage machines that live forever, facilitate easier migrations, and allow planned-for in-place hardware and software upgrades without disruption, you can reset expectations on asset useful life and data center productivity. Now, let's look a little deeper at the different technology areas involved.
Presenting this slide… Another critical element to maintaining high availability is to remove the downtime associated with data migration and storage system upgrades. This slide shows how our advanced storage virtualization capabilities vastly simplify and speed this operation, ensuring no application downtime is required.