How HudsonAlpha Transforms Hybrid Cloud Deployment Complexity Into a Management Force Multiplier
Transcript of a discussion on how HudsonAlpha is testing a new Hewlett Packard Enterprise solution, OneSphere, to gain a simpler and more common interface to manage hybrid computing.
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of
the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor
Solutions, your host and moderator for this ongoing discussion on digital transformation
success stories. Stay with us now to learn how agile businesses are fending off
disruption -- in favor of innovation.
Our next hybrid IT management success story examines how the nonprofit research
institute HudsonAlpha improves how it harnesses and leverages a spectrum of IT
deployment environments. We’ll now learn how HudsonAlpha has been testing a new
Hewlett Packard Enterprise (HPE) solution, OneSphere, to gain a common and
simplified management interface to rule them all.
Here to help explore the benefits of improved levels of multi-cloud visibility and process automation is Katreena Mullican, Senior Architect and Cloud Whisperer at HudsonAlpha Institute for Biotechnology in Huntsville, Alabama.
Welcome, Katreena.
Katreena Mullican: Thank you, Dana. Thank you for having me
as a part of your podcast.
Gardner: We’re delighted to have you with us. What’s driving
the need to solve hybrid IT complexity at HudsonAlpha?
Mullican: The big drivers at HudsonAlpha are the requirements for data locality and
ease-of-adoption. We produce about 6 petabytes of new data every year, and that rate is
increasing with every project that we do.
We support hundreds of research programs with data and trend analysis. Our infrastructure requires quickly iterating to identify the approaches that are both cost-effective and the best fit for the needs of our users.
Gardner: Do you find that having multiple types of IT platforms, environments, and
architectures creates a level of complexity that’s increasingly difficult to manage?
Mullican: Gaining a competitive edge requires adopting new approaches to hybrid IT.
Even carefully contained shadow IT is a great way to develop new approaches and
attain breakthroughs.
Gardner: You want to give people enough leash where they can go and roam and
experiment, but perhaps not so much that you don’t know where they are, what they are
doing.
Software-defined everything
Mullican: Right. “Software-defined everything” is our mantra. That’s what we aim to do
at HudsonAlpha for gaining rapid innovation.
Gardner: How do you gain balance from too hard-to-manage complexity, with a potential
of chaos, to the point where you can harness and optimize -- yet allow for
experimentation, too?
Mullican: IT is ultimately responsible for the
security and the uptime of the infrastructure. So it’s
important to have a good framework on which the
developers and the researchers can compute. It’s
about finding a balance between letting them have
provisioning access to those resources versus
being able to keep an eye on what they are doing.
And not only from a usage perspective, but from a
cost perspective, too.
Gardner: Tell us about HudsonAlpha and its fairly
extreme IT requirements.
Mullican: HudsonAlpha is a nonprofit organization of entrepreneurs, scientists, and
educators who apply the benefits of genomics to everyday life. We also provide IT
services and support for about 40 affiliate companies on our 150-acre campus in
Huntsville, Alabama.
Gardner: What about the IT requirements? How do you fulfill that mandate using technology?
Mullican: We produce 6 petabytes of new data every year. We have millions of hours of
compute processing time running on our infrastructure. We have hardware acceleration.
We have direct connections to clouds. We have collaboration for our researchers that
extends throughout the world to external organizations. We use containers, and we use
multiple cloud providers.
Gardner: So you have been doing multi-cloud before there was even a word for multi-cloud?
Mullican: We are the hybrid-scale and hybrid IT organization that no one has ever heard
of.
Gardner: Let’s unpack some of the hurdles you need to overcome to keep all of your
scientists and researchers happy. How do you avoid lock-in? How do you keep it so that
you can remain open and competitive?
Agnostic arrangements of clouds
Mullican: It’s important for us to keep our local
datacenters agnostic, as well as our private and
public clouds. So we strive to communicate with
all of our resources through application
programming interfaces (APIs), and we use
open-source technologies at HudsonAlpha. We
are proud of that. Yet there are a lot of possibilities for arranging all of those pieces.
There are a lot [of services] that you can combine with the right toolsets, not only in your
local datacenter but also in the clouds. If you put in the effort to write the code with that
in mind -- so you don’t lock into any one solution necessarily -- then you can optimize
and put everything together.
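To make that lock-in-avoiding approach concrete, here is a minimal Python sketch, not HudsonAlpha's actual tooling: a small provider-agnostic interface with a real AWS backend (via boto3) and a hypothetical on-premises stub. All class and function names, and the stub backend, are assumptions for illustration.

```python
# Illustrative sketch (not HudsonAlpha's code) of cloud-agnostic provisioning:
# callers talk to one small interface, and each environment hides its own API
# behind an implementation of it.

from abc import ABC, abstractmethod

import boto3  # AWS SDK for Python


class Provisioner(ABC):
    """Common surface for every environment, local or public cloud."""

    @abstractmethod
    def provision(self, name: str, size: str) -> str:
        """Create a compute instance and return its provider-side ID."""


class AwsProvisioner(Provisioner):
    def __init__(self, region: str, image_id: str):
        self._ec2 = boto3.client("ec2", region_name=region)
        self._image_id = image_id  # an approved base image (AMI)

    def provision(self, name: str, size: str) -> str:
        resp = self._ec2.run_instances(
            ImageId=self._image_id,
            InstanceType=size,
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "Name", "Value": name}],
            }],
        )
        return resp["Instances"][0]["InstanceId"]


class OnPremProvisioner(Provisioner):
    """Hypothetical stub for a local datacenter backend (VMware, composable
    infrastructure, etc.); shown only to illustrate the shape."""

    def provision(self, name: str, size: str) -> str:
        raise NotImplementedError("wire this to the on-premises API")


def provision_anywhere(backend: Provisioner, name: str, size: str) -> str:
    # Nothing here imports a provider SDK directly, so swapping or adding
    # a cloud never ripples through the calling code.
    return backend.provision(name, size)
```

The design choice is the one Mullican describes: the effort goes into the seam between environments, so no single backend becomes a hard dependency.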
Gardner: Because you are a nonprofit institute, you often seek grants. But those grants can come with unique requirements, including IT usage stipulations and cloud choice considerations.
Cloud cost control, granted
Mullican: Right. Researchers are applying for grants throughout the year, and now with
the National Institutes of Health (NIH), when grants are awarded, they come with
community cloud credits, which is an exciting idea for the researchers. It means they can
immediately begin consuming resources in the cloud -- from storage to compute -- and
that cost is covered by the grant.
So they are anxious to get started on that, which brings challenges to IT. We certainly
don’t want to be the holdup for that innovation. We want the projects to progress as
rapidly as possible. At the same time, we need to be aware of what is happening in a
cloud and not lose control over usage and cost.
Gardner: Certainly HudsonAlpha is an extreme test bed for multi-cloud management,
with lots of different systems, changing requirements, and the need to provide the
flexibility to innovate to your clientele. When you wanted a better management capability,
to gain an overview into that full hybrid IT environment, how did you come together with
HPE and test what they are doing?
Variety is the spice of IT
Mullican: We’ve invested in composable
infrastructure and hyperconverged infrastructure
(HCI) in our datacenter, as well as blade server
technology. We have a wide variety of compute,
networking, and storage resources available to us.
The key is: How do we rapidly provision those
resources in an automated fashion? I think the key there is not only for IT to be aware of
those resources, but for developers to be as well.
We have groups of developers dealing with bioinformatics at HudsonAlpha. They can
benefit from all of the different types of infrastructure in our datacenter. What HPE
OneSphere does is enable them to access -- through a common API -- that infrastructure.
So it’s very exciting.
Gardner: What did HPE OneSphere bring to the table for you in order to be able to
rationalize, visualize, and even prioritize this very large mixture of hybrid IT assets?
Mullican: We have been beta testing HPE OneSphere since October 2017, and we
have tied it into our VMware ESX Server environment, as well as our Amazon Web
Services (AWS) environment successfully -- and that’s at an IT level. So our next step is
to give that to researchers as a single pane of glass where they can go and provision the
resources themselves.
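As a rough illustration of what that single pane of glass makes possible, here is a hedged Python sketch that polls one management API for deployments across connected environments. The base URL, endpoint paths, and response fields are all assumptions made for illustration, not the documented OneSphere API; the product documentation is authoritative.

```python
# Hedged sketch: one API call spans deployments regardless of whether they
# run on-premises (VMware) or in a public cloud (AWS). Endpoint paths and
# JSON fields below are assumptions, not the documented OneSphere API.

import requests

BASE = "https://onesphere.example.org"  # hypothetical management instance


def get_token(username: str, password: str) -> str:
    # Assumed token-based login endpoint.
    resp = requests.post(
        f"{BASE}/rest/session",
        json={"userName": username, "password": password},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["token"]


def list_deployments(token: str) -> list:
    # Assumed endpoint: one query covers every connected environment.
    resp = requests.get(
        f"{BASE}/rest/deployments",
        headers={"Authorization": token},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("members", [])


if __name__ == "__main__":
    token = get_token("svc-account", "********")
    for d in list_deployments(token):
        print(d.get("name"), d.get("status"), d.get("zoneUri"))
```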
Gardner: What might this capability bring to you and your organization?
Cross-training the clouds
Mullican: We want to do more with cross-cloud. Right now we are very adept at
provisioning within our datacenters, provisioning within each individual cloud.
HudsonAlpha has a presence in all the major public clouds -- AWS, Google, Microsoft
Azure. But the next step would be to go cross-cloud, to provision applications across
them all.
For example, you might have an application that runs as a series of microservices. So
you can have one microservice take advantage of your on-premises datacenter, such as
for local storage. And then another piece could take advantage of object storage in the
cloud. And even another piece could be in another separate public cloud.
But the key here is that our developers and researchers -- the end users of OneSphere -- don’t need to know all of the specifics of provisioning in each of those environments. That is not a level of expertise in their wheelhouse. In this new
OneSphere way, all they know is that they are provisioning the application in the pipeline
-- and that’s what the researchers will use. Then it’s up to us in IT to come along and
keep an eye on what they are doing through the analytics that HPE OneSphere
provides.
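A minimal sketch of that cross-cloud placement idea follows; the service names and environment assignments are illustrative, and deploy_fn stands in for whatever provisioning call IT supplies underneath.

```python
# A hedged sketch of cross-cloud placement: each microservice in one
# application is pinned to the environment that suits it, while the
# researcher sees a single "deploy the pipeline" action. All names
# below are made up for illustration.
PLACEMENT = {
    "genome-ingest":   "on-premises",  # needs fast local storage
    "variant-calling": "aws",          # burst compute
    "results-portal":  "azure",        # public-facing web tier
}

def deploy_pipeline(deploy_fn):
    """deploy_fn(service, environment) hides each cloud's specifics."""
    for service, environment in PLACEMENT.items():
        deploy_fn(service, environment)

# Researchers call deploy_pipeline(...) once; IT supplies the deploy_fn
# that knows how to provision in each environment.
```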
Gardner: Because OneSphere gives you the visibility to see what the end users are
doing, potentially, for cost optimization and remaining competitive, you may be able to
play one cloud off another. You may even be able to automate and orchestrate that.
Mullican: Right, and that will be an ongoing
effort to always optimize cost -- but not at the risk
of slowing the research. We want the research to
happen, and to innovate as quickly as possible.
We don’t want to be the holdup for that. But we
definitely do need to loop back around and keep
an eye on how the different clouds are being
used and make decisions going forward based
on the analytics.
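As a toy illustration of that decision loop, the sketch below picks the cheapest environment that still satisfies a policy constraint. All rates and constraints are made-up examples, not real pricing.

```python
# A minimal sketch of "playing one cloud off another": choose the
# cheapest environment that a workload's policy still allows.
# Rates and the policy set are illustrative assumptions.
HOURLY_RATES = {"on-premises": 0.09, "aws": 0.12, "azure": 0.11}
DATA_LOCALITY_OK = {"on-premises", "aws"}  # e.g., policy-approved sites

def cheapest_allowed() -> str:
    candidates = {env: rate for env, rate in HOURLY_RATES.items()
                  if env in DATA_LOCALITY_OK}
    return min(candidates, key=candidates.get)

print(cheapest_allowed())  # -> "on-premises"
```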
Gardner: There may be other organizations that are going to be more cost-focused, and
they will probably want to dial back to get the best deals. It’s nice that we have the
flexibility to choose an algorithmic approach to business, if you will.
Mullican: Right. The research that we do at HudsonAlpha saves lives, and it is of the utmost importance that we conduct that research as quickly as possible.
Gardner: HPE OneSphere seems geared toward being cloud-agnostic. They are
beginning on AWS, yet they are going to be adding more clouds. And they are
supporting more internal private cloud infrastructures, and using an API-driven approach
to microservices and containers.
As an early tester, and someone who has been a long-time user of HPE infrastructure, is
there anything about the combination of HPE Synergy, HPE SimpliVity HCI, and HPE
3PAR intelligent storage -- in conjunction with OneSphere -- that’s given you a ‘whole
greater than the sum of the parts’ effect?
Mullican: HPE Synergy and composable infrastructure is something that is very near
and dear to me. I have a lot of hours invested with HPE Synergy Image Streamer and
customizing open-source applications on Image Streamer -- open-source operating
systems and applications.
The ability to utilize that in the mix that I have architected natively with OneSphere -- in
addition to the public clouds -- is very powerful, and I am excited to see where that goes.
Gardner: Any words of wisdom for others who may not yet have gone down this road? What do you advise others to consider as they seek to better compose, automate, and optimize their infrastructure?
Get adept at DevOps
Mullican: It needs to start with IT. IT needs to take on more of a DevOps approach.
As far as putting an emphasis on automation -- and
being able to provision infrastructure in the
datacenter and the cloud through automated APIs -- a
lot of companies probably are still slow to adopt that.
They are still provisioning with older methods, and I think it’s important that they make that shift. But then, once
your IT department is adept with DevOps, your
developers can begin feeding from that and using
what IT has laid down as a foundation. So it needs to start with IT.
It involves a skill set change for some of the traditional system administrators and
network administrators. But now, with software-defined networking (SDN) and with
automated deployments and provisioning of resources -- that’s a skill set that IT really
needs to step up and master. That’s because they are going to need to set the example
for the developers who are going to come along and be able to then use those same
tools.
That’s the partnership that companies really need to foster -- and it’s between IT and
developers. And something like HPE OneSphere is a good fit for that, because it
provides a unified API.
On one hand, your IT department can be busy mastering how to communicate with its infrastructure through that tool. At the same time, the developer teams can be refactoring applications as microservices. So both can be working on all of this at the same time.
Then when it all comes together with a service catalog of options, in the end it’s just a
simple interface. That’s what we want, to provide a simple interface for the researchers.
They don’t have to think about all the work that went into the infrastructure; they just choose the proper workflow and pipeline for future projects.
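A minimal sketch of that service-catalog idea, with entirely illustrative workflow and template names: researchers pick a named pipeline, and the infrastructure work IT prepared stays hidden behind it.

```python
# A hedged sketch of a service catalog as a simple interface: each
# named workflow resolves to the provisioning template IT prepared.
# Catalog entries, templates, and zones are illustrative assumptions.
CATALOG = {
    "rna-seq-pipeline":  {"template": "hpc-batch",   "zone": "on-prem"},
    "variant-analysis":  {"template": "gpu-cluster", "zone": "aws"},
    "public-dataset-ui": {"template": "web-service", "zone": "azure"},
}

def request_workflow(name: str) -> dict:
    """Resolve a researcher's catalog choice to IT's template."""
    try:
        return CATALOG[name]
    except KeyError:
        raise ValueError(f"Unknown workflow: {name!r}. "
                         f"Choose from {sorted(CATALOG)}")

print(request_workflow("rna-seq-pipeline"))
```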
Gardner: It also sounds, Katreena, like you are able to elevate IT to a solutions-level
abstraction, and that OneSphere is an accelerant to elevating IT. At the same time,
OneSphere is an accelerant to the adoption of DevOps, which means it’s also elevating
the developers. So are we really finally bringing people to that higher plane of business-
focus and digital transformation?
HCI advances across the globe
Mullican: Yes. HPE OneSphere is an advantage to both of those departments, which in some companies can still be quite disparate. At HudsonAlpha, DevOps is part of IT; it’s not a separate department, but in some companies that’s not the case.
And I think we have a lot of advantages because we think in terms of automation, and
we think in terms of APIs from the infrastructure standpoint. And the tools that we have
invested in, the types of composable and hyperconverged infrastructure, are helping
accomplish that.
Gardner: I speak with a number of organizations that are global, and they have some
data sovereignty concerns. I’d like to explore, before we close out, how OneSphere also
might be powerful in helping to decide where data sets reside in different clouds, private
and public, for various regulatory reasons.
Is there something about having that visibility into hybrid IT that extends into hybrid data
environments?
Mullican: Data locality is one of our driving factors in IT, and we do have on-premises
storage as well as cloud storage. There is a time and a place for both of those, and they
do not always mix, but we have requirements for our data to be available worldwide for
collaboration.
So, the services that HPE OneSphere makes available are designed to use the
appropriate data connections, whether that would be back to your object storage on-
premises, or AWS Simple Storage Service (S3), for example, in the cloud.
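A minimal sketch of that pattern, assuming a hypothetical on-premises S3-compatible endpoint and bucket names: the same client code reaches either store, selected by where the data is allowed to live.

```python
# A hedged sketch of choosing "the appropriate data connection" per
# data set: identical S3 client code talks either to AWS S3 or to an
# S3-compatible on-premises object store. The endpoint URL and bucket
# names are hypothetical.
import boto3

def object_store_client(locality: str):
    if locality == "on-premises":
        # S3-compatible local object storage, behind the firewall
        return boto3.client("s3",
                            endpoint_url="https://s3.hudsonalpha.local")
    return boto3.client("s3")  # AWS S3 for worldwide collaboration data

client = object_store_client("on-premises")
client.upload_file("results.vcf", "restricted-genomics", "results.vcf")
```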
Gardner: Now we can think of HPE OneSphere as also elevating data scientists -- and
even the people in charge of governance, risk, and compliance (GRC) around adhering
to regulations. It seems like it’s a gift that keeps giving.
Hybrid hard work pays off
Mullican: It is a good fit for hybrid IT and what we do at HudsonAlpha. It’s a natural
addition to all of the preparation work that we have done in IT around automated
provisioning with HPE Synergy and Image Streamer.
HPE OneSphere is a way to showcase to the end user all of the efforts that have been,
and are being, done by IT. That’s why it’s a satisfying tool to implement, because, in the
end, you want what you have worked on so hard to be available to the researchers and
be put to use easily and quickly.
Gardner: It was a long time coming, right?
Mullican: Yes, I think so.
Gardner: I’m afraid we will have to leave it there. We have been exploring how nonprofit
research institute HudsonAlpha is better managing its multiple cloud and hybrid IT
deployment environments. And we have learned how HPE OneSphere is delivering
consolidated and deep insights across multiple clouds and IT deployments at
HudsonAlpha, an early beta tester and user.
So please join me in thanking our guest, Katreena Mullican, Senior Architect and Cloud
Whisperer at HudsonAlpha Institute for Biotechnology.
Mullican: Thank you very much.
Gardner: And a big thank you to our audience as well for joining us for this
BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana
Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of
Hewlett Packard Enterprise-sponsored interviews.
Thanks again for listening. Please pass this content along to your IT community and do
come back next time.
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the
transcript. Sponsor: Hewlett Packard Enterprise.
Transcript of a discussion on how HudsonAlpha is testing a new Hewlett Packard
Enterprise solution, OneSphere, to gain a simple and more common interface to manage
hybrid computing. Copyright Interarbor Solutions, LLC, 2005-2018. All rights reserved.