The HPC storage performance tier is well defined: scale-out solid state storage systems. But the capacity tier is up for debate. Should you use a high end NAS file system or make the switch to object storage? More importantly: How do you move data from the performance tier to the capacity tier without placing additional burden on already overworked IT personnel?
We answer these questions and provide designs that solve the HPC storage tug-of-war in our webinar with Caringo. Listen as experts on HPC, NAS and Object Storage discuss the HPC storage challenge, debate potential solutions and provide guidance on creating the right architecture.
The flash market started out monolithically. Flash was a single media type (high performance, high endurance SLC flash). Flash systems also had a single purpose of accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high performance flash or highly dense, medium performance flash systems. At the same time, high capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
Webinar: End NAS Sprawl - Gain Control Over Unstructured Data (Storage Switzerland)
The key to ending NAS Sprawl is to fix the file system so it can offer cost-effective, scalable, high-performance storage. In this webinar, Storage Switzerland Lead Analyst George Crump, Quantum VP of Global Marketing Molly Rector, and Quantum StorNext Solution Marketing Senior Director Dave Frederick discuss the challenges facing the typical scale-out storage environment and what IT professionals should be looking for in solutions to eliminate NAS Sprawl once and for all.
Backup systems are being asked to do things they were never designed for - and it’s killing them. Join experts from Storage Switzerland and NEC as they discuss the four assumptions that are killing backup storage:
* Assuming backup is an archive
* Assuming it can grow forever
* Assuming it can support production applications
* Assuming deduplication won’t impact recovery
You’ll come away with strategies that could save your backup system from the changes that threaten to overwhelm it.
For complete audio and access to exclusive papers, register for our on-demand webinar.
https://www.brighttalk.com/webcast/5583/126249
The rapid growth of in-memory compute applications is not surprising given the tremendous performance gains they can offer. Jobs that used to take hours can now take minutes or seconds, as they are no longer subject to the rotational and seek latencies of spinning media. While Flash memory provides some relief, it is still roughly a hundred times slower than the DRAM that in-memory compute applications use as their primary storage.
One drawback to in-memory compute applications is the high cost of DRAM. Not only is its acquisition cost an order of magnitude higher than Flash, DRAM also consumes far more power, which is a significant issue in data centers and a major contributor to operational costs. In addition, a single server has limited DRAM capacity, so larger datasets must either find an alternative solution or cope with the nuisance of sharding. Furthermore, reaching a server's maximum DRAM capacity requires higher-cost DRAM modules, further escalating the cost of compute.
We discuss a paradigm to allow in-memory computing applications to extend their capacity by utilizing Flash memory; often with minimal performance loss. We give examples of applications that have been modified to utilize the paradigm and show performance comparisons. We also discuss TCO and the relative cost per transaction of the different solutions.
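The DRAM-plus-flash tiering idea described above can be sketched as a toy two-tier key-value store: the hottest entries stay in memory, colder entries spill to a flash-backed directory and are promoted back on access. This is an illustrative model only, not the paradigm presented in the talk; all names are invented.

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class TieredStore:
    """Toy DRAM-plus-flash store: keeps the hottest `dram_slots`
    entries in memory and spills the rest to files on disk."""

    def __init__(self, dram_slots=2, flash_dir=None):
        self.dram_slots = dram_slots
        self.dram = OrderedDict()  # LRU order: most recently used last
        self.flash_dir = flash_dir or tempfile.mkdtemp()

    def _flash_path(self, key):
        return os.path.join(self.flash_dir, f"{key}.bin")

    def put(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        # Evict the coldest entries to the flash tier.
        while len(self.dram) > self.dram_slots:
            cold_key, cold_val = self.dram.popitem(last=False)
            with open(self._flash_path(cold_key), "wb") as f:
                pickle.dump(cold_val, f)

    def get(self, key):
        if key in self.dram:  # DRAM hit: fast path
            self.dram.move_to_end(key)
            return self.dram[key]
        with open(self._flash_path(key), "rb") as f:  # flash tier
            value = pickle.load(f)
        self.put(key, value)  # promote back into DRAM
        return value
```

The point of the sketch is the trade-off the webinar discusses: capacity beyond DRAM at the price of an occasional slower read from flash.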
Slides: Start Small, Grow Big with a Unified Scale-Out Infrastructure (NetApp)
Slides from the on-demand webcast (showcasing customer Cirrity). Learn how NetApp® clustered Data ONTAP® 8.2 can help you scale multiple workloads on a single unified storage platform with support for multiple protocols such as SMB 3.0 and pNFS, and scale the performance of all of your applications, whether on SAN or NAS infrastructure.
Unstructured data is growing at a staggering rate. It is breaking traditional storage and IT budgets and burying IT professionals under a mountain of operational challenges. Listen as Cloudian and Storage Switzerland discuss, in a panel-style conversation, the seven key reasons organizations can dramatically lower storage infrastructure costs by deploying a hardware-agnostic object storage solution instead of sticking with legacy NAS.
In this webinar, join experts from Storage Switzerland and Tegile to discover whether the All-Flash Data Center can become reality. We will explore the return on investment that All-Flash systems can deliver, such as increased user and virtual machine densities, lower drive counts and simpler storage architectures. We will also look at some of the methods All-Flash systems employ to deliver an acceptable cost per GB, such as thin provisioning, clones, deduplication and compression. Finally, we will take one last look at disk: does it have a role in the All-Flash Data Center, and if so, what should that role be?
Headquartered in Asia with coverage across the region and beyond, 1cloudstar is a pure-play Cloud Services Provider offering cloud-related consulting and professional services. 1cloudstar brings a deep understanding of what is possible when legacy systems and cloud solutions coexist and we have a clear vision of the digital future toward which this hybrid world is leading us. We combine those insights with our traditional Enterprise IT knowledge to drive innovation and transform complex environments into high-performance engines.
Whether you’re in the early stages of evaluating how the cloud can benefit your business, need guidance on developing a cloud strategy, or want to integrate new cloud technology with your existing technology investments, 1cloudstar can leverage the skills and experience gained from many other enterprise cloud projects to ensure you achieve your business objectives.
1cloudstar’s unique strategic approach and engagement model ‘1cloudstar Engage’, combined with its cloud infrastructure and application integration skills, sets the company apart from traditional technology system integrators. 1cloudstar’s team of consultants can leverage years of technology infrastructure and applications experience, along with first-hand experience of public, private and hybrid cloud projects, to ensure your enterprise journey to cloud is a success.
1cloudstar accelerates the cloud-powered business, helping enterprises achieve real results from cloud applications and platforms.
BCLOUD: Smart Scale your Storage - festival ICT 2015 (festival ICT 2016)
Unstructured data from Big Data analytics, mobile applications, sync-and-share, and cloud-like services, to name just a few, generates volumes of data that risk undermining companies' agility and competitiveness. Modern object storage systems can serve as a global platform delivering secure, available, reliable, geographically distributed, and cost-competitive storage for a wide range of services, such as archiving, backup, and sync-and-share, improving end-user experience and productivity.
Cloudian HyperStore® is an Amazon S3-compatible object storage platform for building extremely flexible, scalable, and reliable distributed storage systems, while also taking advantage of the large number of applications available in the Amazon ecosystem. The platform is designed specifically to meet multi-tenant cloud storage requirements for high data volumes, with a robust and flexible management interface.
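Because the platform speaks the S3 API, standard S3 tooling can simply be pointed at it. A minimal sketch using boto3, where the endpoint URL, credentials, and bucket name are all placeholders, not real values:

```python
import boto3

# Point a standard S3 client at an S3-compatible endpoint.
# The endpoint_url and credentials below are hypothetical placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# From here on, the code is identical to working against Amazon S3.
s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="report.txt", Body=b"hello")
```

The only change relative to code written for Amazon S3 is the `endpoint_url`, which is what makes the existing S3 application ecosystem reusable.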
Dell Solutions Tour 2014 Norge
Kaj Inge Skjønhaug, Storage Specialist, Dell Norway
Get a technical walkthrough of Dell's latest SSD and flash-based storage solutions. The presentation covers all-flash arrays, hybrid arrays, and FluidCache solutions, and the technical walkthrough shows how SSDs can be applied in different solutions for maximum effect. The presentation also covers NAS and VMware-related storage technology.
NetApp enterprise All Flash Storage
This presentation provides the key messages and differentiation, value propositions, and promotional programs for AFF.
MT48 A Flash into the future of storage…. Flash meets Persistent Memory: The... (Dell EMC World)
Several key technology trends are redefining the boundaries of the traditional storage infrastructure stack: In a rapidly changing world of system interconnects, emerging memory media, and storage semantics, Server Designers and Storage Architects are engaging and collaborating like never before to exploit breakthrough technology capabilities.
With the backdrop of Big Data volume, Cloud Data ubiquity and IoT Data velocity, Application Developers are entering the Post-POSIX world of real-time, high-frequency, low latency data management frameworks.
This session will address key technology trends in Storage, Networking, and Compute, as they define the parameters of a Memory Centric Architecture (MCA) and the Next Generation Data Center.
The advance of solid state disk as a replacement for mechanical disk is driving gains in application performance and computing efficiency. However, the same advancements are also driving even more revolutionary changes in system memory. A new and even higher-performance persistent data storage layer is emerging that will enable broad adoption of the as-yet-unrealized power of real-time computing.
Maximizing the Value of Flash Across the Entire IT Stack
Combining 20 years of innovation and expertise with the latest in enterprise solid state memory.
Need For Speed - Using Flash Storage to optimise performance and reduce costs... (NetAppUK)
Flash Storage technologies are opening up a wealth of new opportunities for improving the optimisation of applications, data and storage, as well as reducing costs. In this session, Peter Mason, NetApp Consulting Systems Engineer, shares his experiences and discusses the use and impact of different Flash technologies.
Cost Effectively Run Multiple Oracle Database Copies at Scale (NetApp)
Scaling multiple databases with a single legacy storage system works well from a cost perspective, but workload conflicts and hardware contention make these solutions an unattractive choice for anything but low-performance applications.
Even though users and application owners are demanding it, the Always-On Data Center seems unrealistic to most IT professionals. Overcoming the cost and complexity of an Always-On environment while delivering consistent results is almost too much to ask. But the reality is that data centers of all sizes can affordably meet this expectation. The Always-On environment requires a holistic approach, counting on a highly virtualized infrastructure, flexible data protection software, and purpose-built protection storage.
Listen in as experts from Storage Switzerland, Veeam and ExaGrid architect a data availability and protection infrastructure that can meet and even exceed the Always-On expectations of an Always-On organization.
In this deck from the DDN booth at SC18, John Bent from DDN presents: IME - Unlocking the Potential of NVMe.
"DDN’s Infinite Memory Engine (IME) is a scale-out, software-defined, flash storage platform that streamlines the data path for application I/O. IME interfaces directly to applications and secures I/O via a data path that eliminates file system bottlenecks. With IME, architects can realize true flash-cache economics with a storage architecture that separates capacity from performance."
Learn more: https://www.ddn.com/products/ime-flash-native-data-cache/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Next to performance and scalability, cost efficiency is one of the top three reasons companies most often cite for acquiring storage technology. Businesses are struggling to control storage costs and to reduce the OPEX of administrative staff, infrastructure and data management, and energy and environmental overhead. Every storage vendor, it seems, including most Software-defined Storage purveyors, is promising ROIs that require nothing short of a suspension of disbelief.
In this presentation, Jon Toigo of the Data Management Institute digs out the root causes of high storage costs and sketches out a prescription for addressing them. He is joined by Ibrahim “Ibby” Rahmani of DataCore Software, who will address the specific cost efficiency advantages being realized by customers of Software-defined Storage.
VMworld 2013: Dell Solutions for VMware Virtual SAN (VMworld)
VMworld 2013
Sheetal Kochavara, VMware
Bryan Martin, Dell Inc.
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
This ESG Lab Validation Report presents the hands-on evaluation and testing results of the NetApp FAS2200 series with Flash Pool. ESG Lab focused on key areas that make the FAS2200 an attractive offering for midsized businesses and distributed enterprises: cost-effective mixed workload performance, ease of implementation, and storage efficiency.
DataCore Software introduction from my "Meet DataCore" webinar. DataCore products include software-defined storage and hyperconverged infrastructure solutions. DataCore has more than 10K customers and 30K+ implementations worldwide.
The combination of scalable ANSYS design and simulation software and HPC clusters with Panasas parallel storage has demonstrated significant new productivity advantages for workflows in computer-aided engineering (CAE) applications. The combination provides dramatic cost-performance improvements and speeds time-to-results for engineering simulation solutions on commodity HPC clusters.
Google has recently announced an expansion of its cloud storage service. It offers service levels similar to those of Amazon S3 and Glacier, but with simplified pricing. How viable is its cloud storage product for the average customer? How do its service levels compare to Amazon's and Azure's? What about pricing? Is it really that different? And what are some example use cases?
Other questions center around applications that support these offerings. If an application supports Amazon, will it be easy for them to support Google? Is there anything about Google cloud storage that makes it easier or harder for service providers to work with them?
Everyone understands that disk has become the primary target for backups over the last several years. It's also safe to say that the main type of disk storage used as a backup target is a purpose-built backup appliance that presents itself to the backup application as an NFS or SMB server and then deduplicates any backups stored on it.
But what about object storage? Object storage vendors tout that their systems are less expensive to buy and less expensive to operate than traditional disk arrays and NAS appliances. So, does it make sense to use them for backups? How much is deduplication a factor and is deduplication even available with object storage? What else can object storage bring to the table that traditional disk backup appliances can’t?
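To make the deduplication question above concrete, the core mechanism can be sketched in a few lines: split data into fixed-size blocks, key each block by its hash, and store each unique block once. This is a toy model for illustration, not any vendor's implementation (real systems typically use variable-size chunking and much larger blocks).

```python
import hashlib

class DedupStore:
    """Toy fixed-size-block deduplicating store: each unique block
    is kept exactly once, keyed by its SHA-256 digest."""

    def __init__(self, block_size=4):
        self.block_size = block_size
        self.blocks = {}   # digest -> block bytes (stored once)
        self.objects = {}  # object name -> ordered list of digests

    def put(self, name, data):
        digests = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # new blocks only
            digests.append(digest)
        self.objects[name] = digests

    def get(self, name):
        # Reassemble the object from its block digests.
        return b"".join(self.blocks[d] for d in self.objects[name])
```

Storing a second, identical backup adds only a new digest list, not new blocks, which is why dedup matters so much to the economics of backup targets.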
Webinar: Preserve, Distribute and Deliver - M&E's Three Biggest Data Challenges (Storage Switzerland)
In this webinar, Storage Switzerland and Caringo, providers of cloud and object storage, discuss why preservation, distribution and delivery are so critical for M&E IT, and why they are so challenging to deliver. More importantly, we will discuss practical solutions to these challenges so IT departments can lead their organizations to more monetization opportunities.
This deck is from the opening session of the "Introduction to Programming Pascal (P100) with CUDA 8" workshop at CSCS in Lugano, Switzerland. The three-day course is intended to offer an introduction to Pascal computing using CUDA 8.
Watch the video: http://wp.me/p3RLHQ-gsQ
Learn more: http://www.cscs.ch/events/event_detail/index.html?tx_seminars_pi1%5BshowUid%5D=155
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Webinar: Cloud Storage: The 5 Reasons IT Can Do it Better (Storage Switzerland)
In this webinar learn the five reasons why a private cloud storage system may be more cost effective and deliver a higher quality of service than public cloud storage providers.
In this webinar you will learn:
1. What Public Cloud Storage Architectures Look Like
2. Why Public Providers Chose These Architectures
3. The Problem With Traditional Data Center File Solutions
4. Bringing Cloud Lessons to Traditional IT
5. The Five Reasons IT can Do it Better
Object Storage promises many things: unlimited scalability in both capacity and file count, low-cost but highly redundant capacity, and excellent connectivity to legacy NAS. But despite these promises, object storage has not caught on in the enterprise the way it has in the cloud. It seems that, for the enterprise, object storage just isn't a good fit. The problem is that most object storage systems' starting capacity is too large. And while connectivity to legacy NAS systems is available, seamless integration is not. Can object storage be sized so that it is a better fit for the enterprise?
Webinar: Overcoming the Top 3 Challenges of the Storage Status Quo (Storage Switzerland)
Between 2010 and 2020, IDC predicts that the amount of data created by humans and enterprises will increase 50x. Legacy network attached storage (NAS) systems can't meet the unstructured data demands of the mobile workforce or distributed organizations. In this webinar, George Crump, Lead Analyst at Storage Switzerland, and Brian Wink, Director of Solutions Engineering at Panzura, expose the hidden gotchas of the storage status quo and explore how to manage unstructured data in the cloud.
Webinar: Using the Cloud to Create an All-Flash Data Center (Avere Systems)
For years vendors have been trying to drive down the cost of flash so that the all-flash data center can become a reality. The problem is that even the rapidly declining price of flash storage can't keep pace with the rapidly declining price of hard disk. As a result, data that does not need to be on flash storage has to be stored on something less expensive. But does that less expensive storage need to be another hard disk array, or could it be stored in the cloud?
In this webinar, join Storage Switzerland's founder George Crump and Avere Systems CEO Ron Bianchini for an interactive webinar, "Using the Cloud to Create an All-Flash Data Center."
Webinar: Which Storage Architecture is Best for Splunk Analytics? (Storage Switzerland)
We discuss the pros and cons of the three most common storage architectures for Splunk, enabling you to decide which makes the most sense for your organization.
1. Leverage existing storage resources
2. Deploy a cloud storage and SaaS solution
3. Deploy a hybrid, Splunk-ready solution
Hierarchical Data Management with SUSE Enterprise Storage and HPE DMF (SUSE Italy)
In this session, HPE and SUSE use real-world cases to illustrate how HPE Data Management Framework and SUSE Enterprise Storage solve the problems of managing exponential data growth by building a flexible, scalable and cost-effective software-defined architecture. (Alberto Galli, HPE Italia and SUSE)
Deliver Best-in-Class HPC Cloud Solutions Without Losing Your Mind (Avere Systems)
While cloud computing offers virtually unlimited capacity, harnessing that capacity in an efficient, cost effective fashion can be cumbersome and difficult at the workload level. At the organizational level, it can quickly become chaos.
You must make choices around cloud deployment, and these choices could have a long-lasting impact on your organization. It is important to understand your options and avoid incomplete, complicated, locked-in scenarios. Data management and placement challenges make having the ability to automate workflows and processes across multiple clouds a requirement.
In this webinar, you will:
• Learn how to leverage cloud services as part of an overall computation approach
• Understand data management in a cloud-based world
• Hear what options you have to orchestrate HPC in the cloud
• Learn how cloud orchestration works to automate and align computing with specific goals and objectives
• See an example of an orchestrated HPC workload using on-premises data
From computational research to financial backtesting, and research simulations to IoT processing frameworks, decisions made now will not only impact future manageability, but also your sanity.
With AWS, you can choose the right storage service for the right use case. Given the myriad of choices, from object storage to block storage, this session will profile details and examples of some of the choices available to you, with details on real world deployments from customers who are using Amazon Simple Storage Service (Amazon S3), Amazon Elastic Block Store (Amazon EBS), Amazon Glacier, and AWS Storage Gateway.
Webinar: 4 Ways to Improve NetApp Storage Performance Without Replacing It (Storage Switzerland)
New on demand webinar with Storage Switzerland Lead Analyst George Crump and Avere Systems Director Chris Bowen. In this webinar, George and Chris discuss why NAS storage performance is so critical, how to balance storage performance and storage capacity, and four ways to improve storage performance without replacing your existing NAS system.
Webinar: Overcoming the Storage Challenges Cassandra and Couchbase Create (Storage Switzerland)
NoSQL databases like Cassandra and Couchbase are quickly becoming key components of the modern IT infrastructure. But this modernization creates new challenges – especially for storage, in the broadest sense of the term. In-memory databases perform well when there is enough memory available. However, when data sets grow too large and must access storage, application performance degrades dramatically. Moreover, even when enough memory is available, persistent client requests can bring the servers to their knees.
Join Storage Switzerland and Plexistor to learn:
1. What Cassandra and Couchbase are
2. Why organizations are adopting them
3. What storage challenges they create
4. How organizations attempt to work around these challenges
5. How to design a solution to these challenges instead of a workaround
Building a scalable analytics environment to support diverse workloads (Alluxio, Inc.)
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
Building a scalable analytics environment to support diverse workloads
Tom Panozzo, Chief Technology Officer (Aunalytics)
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
Slides: Accelerating Queries on Cloud Data Lakes (DATAVERSITY)
Using “zero-copy” hybrid bursting on remote data to solve data lake analytics capacity and performance problems.
Data scientists want answers on demand. But in today’s enterprise architectures, the reality is that most data remains on-prem, despite the promise of cloud-based analytics. Moving all that data to the cloud has typically not been possible for many reasons including cost, latency, and technical difficulty. So, what if there was a technology that would connect these on-prem environments to any major cloud platform, enabling high-powered computing without the need to move massive amounts of data?
Join us for this webinar where Alex Ma of Alluxio, an open-source data orchestration platform, will discuss how a data orchestration approach offers a solution for connecting traditional on-prem data centers and cloud data lakes with other clouds and data centers. With Alluxio’s “zero-copy” burst solution, companies can bridge remote data centers and data lakes with computing frameworks in other locations, enabling them to offload, compute, and leverage the flexibility, scalability, and power of the cloud for their remote data.
In this on demand webinar, Storage Switzerland and Virtual Instruments give you a 5 step, independent process for moving through a storage refresh project.
1. Storage Technology Evaluation: (flash, hybrid, OpenStack, Ceph, etc)
2. Product Evaluation: Determine which storage system meets your requirements
3. Storage Configuration Optimization: Determine optimal storage configuration
4. Production Management: Determine how to assure performance and rapidly resolve the inevitable problems
5. Change Impact Analysis: Determine how future changes will impact performance
Most organizations making an investment in NetApp Filers count on the system to store user data and host virtual machine datastores from an environment like VMware. In addition, these organizations want their NetApp systems to do more and be the repository for the next wave of unstructured data: data generated by machines. NetApp systems are bursting at the seams, so these organizations are trying to decide what to do next.
In this slidecast, David Cerf from Crossroads Systems describes the company's innovative StrongBox shared storage for HPC data protection.
"StrongBox is a network attached storage (NAS) appliance that is purpose-built to lower the costs of long-term storage and protection for unstructured, fixed content. By pairing a flexible, policy-driven disk cache with Linear Tape File System (LTFS) technology, StrongBox empowers you to control storage costs without sacrificing data availability."
Learn more: http://www.crossroads.com/data-archive-products/strongbox
Watch the video presentation: http://wp.me/p3RLHQ-aT8
Webinar: Getting Beyond Flash 101 - Flash 102 Selecting the Right Flash Array (Storage Switzerland)
Join Storage Switzerland and Data Direct Networks (DDN) for this on demand webinar: "Getting Beyond Flash 101 - Flash 102 Selecting the Right Flash Array”. We discuss the different types of flash storage and compare them, why vendors want to replace your SAN instead of enhance it and what you can do to not only protect your current storage investments but also prepare a path to the future.
Webinar: Are You Treating Unstructured Data as a Second Class Citizen? (Storage Switzerland)
Join Storage Switzerland and Aparavi for our in-person webinar to learn how to improve your ability to protect unstructured data while at the same time gaining insight into it.
To some, tape storage may seem like an outdated technology in the era of NAS and object-based storage. But— here’s a surprise – tape today is more relevant than ever. Even the most modern data centers can benefit from its low cost of ownership, scalability, reliability and security. In our on demand webinar, Storage Switzerland is joined by Spectra Logic, Fujifilm and Iron Mountain to discuss why tape use shouldn’t just continue but actually expand, including in hybrid cloud environments.
Special Presentation of Meet The CEOs - Commvault and Hedvig (Storage Switzerland)
Commvault has acquired Hedvig. What are the ramifications for the combined companies and the storage industry? Find out when we have Sanjay Mirchandani, CEO of Commvault and Avinash Lakshman, CEO of Hedvig in a special Live Edition of our Meet The CEO Webinar series.
Join us to learn a little bit more about the CEO behind this acquisition, what motivated the two companies to come together and what role Hedvig's scale-out Software Defined Storage software plays in Commvault's future. Most importantly, you can ask questions directly of the CEOs.
Panel Discussion: Is Computational Storage a Better Path to Extreme Performance? (Storage Switzerland)
Vendors are rewriting software and adding custom hardware in an attempt to avoid bottlenecking extremely low-latency NVMe Flash and Storage Class Memory technologies. Eventually, they are all at the mercy of physics: data has to traverse an internal, and sometimes an external, network so the computing tier can process it. Computational Storage offers an alternative by performing at least some of the processing on the storage device itself, eliminating most of the network activity.
Join Storage Switzerland's Lead Analyst, George Crump, as he leads a panel of computational storage experts including NGD Systems' Scott Shadley, ScaleFlux's Thad Omura, and Samsung's Pankaj Mehra as they preview the Computational Storage Workshop at the Flash Memory Summit on Thursday, August 8th.
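The computational storage idea described above can be sketched in a few lines of Python. This is a purely illustrative simulation (the function names and records are invented for the example, not any vendor's API): with a conventional array every record crosses the network before the host filters it, while a computational storage device filters locally and ships only the matches.

```python
def host_side_filter(records, predicate):
    """Conventional model: ship all records to the host, then filter."""
    shipped = sum(len(r) for r in records)  # bytes that crossed the wire
    return [r for r in records if predicate(r)], shipped

def device_side_filter(records, predicate):
    """Computational storage model: filter on the drive, ship only matches."""
    matches = [r for r in records if predicate(r)]
    shipped = sum(len(r) for r in matches)
    return matches, shipped

# Hypothetical log records sitting on the drive
records = [b"error: disk 7", b"ok", b"error: fan 2", b"ok", b"ok"]
wants_errors = lambda r: r.startswith(b"error")

host_matches, host_bytes = host_side_filter(records, wants_errors)
dev_matches, dev_bytes = device_side_filter(records, wants_errors)
print(host_bytes, dev_bytes)  # device-side filtering moves fewer bytes
```

The result sets are identical; only the bytes traversing the network differ, which is the latency and bandwidth win the panel debates.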
Webinar: Complete Your Cloud Transformation - Store Your Data in The Cloud (Storage Switzerland)
Organizations are moving to the cloud but according to a recent Osterman Research study, only 14% of companies have completed that transformation. The study clearly identifies data storage as an area where IT can easily accelerate their cloud transformation journey. Potentially more so than any other component, intelligently moving data to the cloud has the opportunity to significantly lower on-premises storage costs without the threat of impacting day to day operations.
Join Storage Switzerland, HubStor and Osterman Research for our webinar where we discuss the results of the Osterman Research study, what it means for IT, and how IT can take advantage of that research to leverage the cloud to alleviate data management and data protection concerns.
Webinar: Simplifying the Enterprise Hybrid Cloud with Azure Stack HCI (Storage Switzerland)
During our on demand webinar, “Simplifying the Large-Scale Hybrid Cloud”, Storage Switzerland and Axellio discuss how Microsoft Azure Stack HCI and Axellio’s FabricXpress Servers can deliver new levels of consolidation in the enterprise. Learn how to intelligently leverage Azure to simplify operations like data protection, business continuity, and data center operations – while deploying less infrastructure and less software for your demanding on-premises workloads.
Webinar: Designing a Storage Consolidation Strategy for Today, the Future and... (Storage Switzerland)
Most storage consolidation strategies fail because they attempt to consolidate to a single piece of storage hardware. To successfully consolidate storage, IT professionals need to look at consolidation strategies that worked. Server consolidation was VMware’s first use case. It was successful because instead of consolidating hardware, VMware consolidated the environment under a single hypervisor (ESXi) and console (vCenter) but still provided organizations with hardware flexibility. A successful storage consolidation strategy needs to follow a similar formula by providing a single software solution that controls a variety of storage hardware, but that software also has to extract maximum performance and value from each hardware platform on which it sits.
Join Storage Switzerland and StorOne as we discuss how to design a storage consolidation strategy for today, the future and the cloud.
Webinar: Is It Time to Upgrade Your Endpoint Data Strategy? (Storage Switzerland)
In this webinar Storage Switzerland and Carbonite discuss the increased importance of endpoints and why organizations need to upgrade their strategy to make sure these devices are protected, secure and in compliance. In light of increasing legal requirements and potential penalties, organizations can no longer afford to ignore this critical issue.
Webinar: Rearchitecting Storage for the Next Wave of Splunk Data Growth (Storage Switzerland)
Join Storage Switzerland and SwiftStack, a Splunk technology partner, for our webinar where our panel of experts will discuss the value of having Splunk analyze larger datasets while providing insight into overcoming infrastructure cost and complexity challenges through Splunk enhancements like SmartStore.
Backup software is continuously improving. Solutions like Veeam Backup and Replication deliver instant recoveries, enabling virtual machine volumes to instantiate directly on the backup device, without having to wait for data to transfer back to primary storage. These solutions can also move older backups to higher capacity, lower cost object storage or cloud storage systems. To deliver meaningful performance during instant recovery without exceeding the backup storage budget requires IT to re-think its backup storage architecture.
Modern backup processes need high performance, low capacity systems to deliver high-performance instant recovery, as well as high-capacity, modest performance systems to store backup data long term and software to manage data placement for the most appropriate recovery performance while not breaking the budget.
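The placement policy the paragraph above describes can be sketched in a few lines. This is an illustrative sketch only (the names and the 14-day window are assumptions, not any product's behavior): recent backups stay on a small, fast tier for instant recovery, while older ones are demoted to high-capacity object or cloud storage.

```python
from dataclasses import dataclass

RETENTION_ON_FAST_TIER_DAYS = 14  # assumed policy knob

@dataclass
class Backup:
    name: str
    age_days: int
    tier: str = "performance"  # new backups land on the fast, low-capacity tier

def apply_placement_policy(backups):
    """Demote backups past the fast-tier window to the capacity tier."""
    for b in backups:
        if b.age_days > RETENTION_ON_FAST_TIER_DAYS:
            b.tier = "capacity"  # e.g. object or cloud storage
    return backups

catalog = [Backup("vm42-daily", 1), Backup("vm42-weekly", 30)]
apply_placement_policy(catalog)
print([(b.name, b.tier) for b in catalog])
```

In a real product this decision would be driven by backup software metadata rather than a simple age field, but the budget logic is the same: keep only the backups likely to need instant recovery on expensive media.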
It is 2019. Why are we still backing up production storage? The simple answer is that most production storage systems can provide some level of data protection, but they can't fully replace backup. It's time to consider what backup independence means. The technology is there for production storage to replace backup, but most organizations haven't fully leveraged it. Join Storage Switzerland and ClearSky Data for our webinar to learn how to design a self-protecting production storage infrastructure that not only protects data but replaces backup.
Join us to learn:
• The difference between data protection and backup
• Why most production systems can't replace backup
• How to architect a production system that is fully protected and achieves backup independence
For over a decade, Network Attached Storage (NAS) was the go-to file storage device for organizations needing to store large amounts of unstructured data. But unstructured data is changing. While large file use cases are still prevalent, small file use cases are becoming more dominant. Workloads like artificial intelligence, analytics and IoT are typically driven by millions, if not billions, of small files.
Object Storage is often hailed as the heir apparent but is it? Can file systems be redesigned to continue to support traditional NAS workloads while also supporting modern, small file and high velocity workloads? Join Storage Switzerland and Qumulo for our webinar, “NAS vs. Object — Can NAS Make a Comeback,” to learn the state of unstructured data storage and if NAS file systems can provide a superior alternative to object storage.
Join us at our event to learn:
• Why traditional NAS solutions fall short
• Why object storage systems haven't replaced NAS
• How to bridge the gap by modernizing NAS file systems
• Live Q&A with file system and NAS experts
The cloud seems like a logical destination for backup data. It is by definition off-site, and the organization no longer needs to worry about allocating valuable floor space to secondary data storage. The problem is that most cloud backup solutions fall short of delivering enterprise class data protection.
Most cloud backup solutions are too complicated to set up and upgrade, don't provide complete platform support, don't provide flexible recovery options, can't meet the enterprise's RTO/RPO requirements and don't provide a class of support that enables organizations to lower their operational expense.
In this live webinar Storage Switzerland and Carbonite discuss the five critical capabilities that enterprises, looking to move to cloud backup, need to make sure their solution has.
Join us for this event to learn:
1. Why switch to the cloud for enterprise backup?
2. The five critical capabilities enterprises MUST have in cloud backup solutions. What they are and why enterprises need them.
3. Why and where most solutions miss the mark
4. How Carbonite Server delivers the five critical capabilities
Webinar: Overcoming the Shortcomings of Legacy NAS with Microsoft Azure (Storage Switzerland)
Most organizations use Network Attached Storage (NAS) to store data, but the modern workforce and organization expect more capabilities than what the typical NAS can provide. Also, as organizations themselves become more distributed, the idea of a single centralized file server with users tunneling through virtual private networks won’t scale. The common alternative, putting a NAS in each remote office offers problems of its own when IT tries to make sure the data is protected and made available to the right users at the right time.
Join Storage Switzerland and Nasuni for our webinar “Providing Global File Services Using Microsoft Azure.” By attending this event, you’ll learn:
1. The shortcomings of the Legacy NAS
2. The Cloud Advantage
3. Where the Cloud Needs Help and How to Get it
Webinar: 3 Steps to be a Storage Superhero - How to Slash Storage Costs (Storage Switzerland)
Reducing, or at least slowing, the growth of storage costs is a top priority for IT organizations in 2019. In this live webinar with Storage Switzerland and SolarWinds, you will learn the three steps IT professionals can take to lower storage costs WITHOUT buying more storage (the typical vendor answer). The biggest challenge is that IT professionals don't arm themselves with the tools they need to be successful, take the next step in their career path and, of course, save their company money.
Join our on demand webinar and learn:
1. How to eliminate and resolve storage problems - not throw hardware at them
2. How to plan and prepare for capacity growth and performance demands
3. How to manage multiple vendors' storage systems without replacing them
NVMe storage systems and NVMe networks promise to reduce latency further and increase performance beyond what SAS-based flash systems and current networking technology can deliver. To take advantage of that performance gain, however, the data center must have workloads that can exploit all the latency reduction and performance improvements that NVMe offers. Vendors emphatically state that NVMe is the next must-have technology, yet many still continue to provide SAS-based arrays using traditional networks.
How, then, do IT planners know that investing in NVMe will truly deliver its benefits for their organizations' demanding applications and produce a measurable return on investment? Just creating a test environment to perform an NVMe evaluation can break the IT budget!
Register now to join Storage Switzerland, Virtual Instruments, and SANBlaze as we look at the state of the data center and provide IT planners with the information they need to decide if NVMe is an investment they should make now or if they should wait a year or more. The key is determining which applications can benefit from NVMe-based approaches.
In this event, IT professionals will learn
- About NVMe, NVMe Storage Systems and NVMe over Fabric Networking
- The Performance Potential of NVMe Storage and Networks
- What attributes are needed for a workload to take advantage of NVMe
- Why NVMe creates problems for current IT testing strategies
- Why a Workload Simulation approach is the only practical way to test NVMe
- How to build a storage performance validation practice
Webinar: All in the Cloud - Data Protection Up, Costs Down (Storage Switzerland)
Managing and protecting critical data across servers and applications in multiple locations around the globe is challenging. And the more decentralized and complex your infrastructure, the more difficult it is to manage your data. The potential bad news? Data loss, site outages, revenue loss, and potential non-compliance with regulations.
But here's the good news: centralizing data protection in the cloud can make all the difference. That's why you should join our webinar and hear from storage expert George Crump of Storage Switzerland and Druva's Chief Technologist, W. Curtis Preston, as they discuss:
• Why protecting a distributed data center is challenging with traditional methods
• How a cloud-centralized backup strategy can be a game changer for your organization
• How Druva can help you drastically improve data protection quality, reduce costs, and simplify global management and configuration
Hyperconverged Infrastructure (HCI) is supposed to simplify the data center by creating an environment that automatically scales as new applications and workloads are added to it. The problem is that the current generation of HCI solutions can only address specific use cases like virtual desktops or tier 2 applications. First-generation HCI solutions don't have the per-node power to accommodate enterprise workloads and tier 1 applications. The organization needs a next-generation HCI solution, HCI 2.0, that can address HCI 1.0's shortcomings and fulfill the original promises of HCI: lower costs, faster innovation, simpler scale, a single vendor and unified management. These capabilities enable HCI 2.0 to handle a variety of storage-intensive workloads.
15 Minute Friday: Tips for The Weekend - Stop the Unstructured Data Madness (Storage Switzerland)
Join Storage Switzerland and Igneous for another Fifteen Minute Friday: Storage Tips for the Weekend. Most organizations are not satisfied with their ability to back up, recover and correctly retain unstructured data. The combination of unprecedented growth and increased scrutiny caused by regulations like GDPR and CCPA is pushing organizations to the brink. It's time to stop the madness!
By joining this short webinar, you’ll learn tips to overcome:
- The Unstructured Data Backup Challenge
- The Archive Challenge
- The Data Privacy / Data Retention Challenge
Webinar: 2019 Storage Strategies Series - What's Your Plan for Object Storage? (Storage Switzerland)
Join Storage Switzerland, Caringo, Cloudian and Scality, for a live roundtable discussion on Object Storage. Learn what you need to know to develop a strategy for object storage. Our panel of experts will discuss what object storage is, what is better/different about object storage than other types of storage and what are the use cases enterprises should consider.
Our Object Storage Panel will provide specific details on how to get started with object storage, what use cases pay the most immediate return on the investment and where object storage fits in an organization's overall storage strategy. The panel will also discuss what is coming next in object storage. Each expert will also give a brief overview of their company's object storage solution. We will close out the roundtable by taking questions from our live audience.
By attending one webinar, you'll learn everything you need to know about object storage, gain actionable information for developing an object storage strategy and hear from three top vendors about how their products will fit into those plans.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Smart TV Buyer Insights Survey 2024 (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
"Impact of front-end architecture on development cost", Viktor Turskyi (Fwdays)
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to really reap the gains of NeSy. Those gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
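The integration above hinges on JMeter's Backend Listener streaming sample metrics to InfluxDB, which Grafana then queries. As a rough illustration of what those data points look like, here is a minimal Python sketch that formats one aggregated JMeter sample in InfluxDB's line protocol. The measurement name, tag, and field names (`count`, `avg`, `pct95`) are illustrative assumptions, not the listener's exact schema.

```python
def to_line_protocol(transaction, count, avg_ms, pct95_ms, timestamp_ns):
    """Format one aggregated JMeter sample as an InfluxDB line-protocol point.

    Line protocol shape: measurement,tag_set field_set timestamp
    The 'i' suffix marks count as an integer field in InfluxDB.
    """
    return (
        f"jmeter,transaction={transaction} "
        f"count={count}i,avg={avg_ms},pct95={pct95_ms} "
        f"{timestamp_ns}"
    )

# Hypothetical sample: 120 'login' transactions averaging 34.5 ms.
point = to_line_protocol("login", 120, 34.5, 80.2, 1700000000000000000)
print(point)
# jmeter,transaction=login count=120i,avg=34.5,pct95=80.2 1700000000000000000
```

In a real setup, the Backend Listener batches points like this and POSTs them to InfluxDB's write endpoint; Grafana dashboards then chart the resulting time series.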
Search and Society: Reimagining Information Access for Radical Futures, Bhaskar Mitra
The field of Information Retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs, while dismantling the artificial separation between work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
DevOps and Testing slides at DASA Connect, Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps means. We also ran a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality, Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3, DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
• UI automation introduction
• UI automation sample
• Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead
Webinar: Performance vs. Cost - Solving The HPC Storage Tug-of-War
1. Performance vs. Cost
Solving The HPC Storage Tug-of-War
Webinar
● Should you use a high end NAS file system or make the switch to object storage?
● How do you move data from the performance tier to the capacity tier?
For audio playback and Q&A go to:
http://bit.ly/HPCTug
2. Who Is Storage Switzerland?
● Analyst firm focused on storage, cloud and virtualization
● Knowledge of these markets is gained through product testing and interaction with end users and suppliers
● The results of this research can be found in the articles, videos, webinars, product analysis and case studies on our web site: http://storageswiss.com
3. Our Speakers
George Crump is the founder of Storage Switzerland, the leading storage analyst firm focused on the subjects of big data, solid state storage, virtualization, cloud computing and data protection. He is widely recognized for his articles, white papers, and videos on such current approaches as all-flash arrays, deduplication, SSDs, software-defined storage, backup appliances, and storage networking. He has over 25 years of experience designing storage solutions for data centers across the US.
4. Our Speakers
Tony Barbagallo is the VP of Product at Caringo, heading product management and product positioning. He has held a number of senior and executive product management and marketing positions in the IT industry at Skyera, GroundWork Open Source, EVault, EMC-Dantz, and Microsoft. He holds a BS in Computer Science from Syracuse University.
The Cloud & Object Storage Platform
5. Who Is Caringo?
Object Storage Experts
10+ Years
Swarm 9
The Cloud and Object Storage Platform
500+ Deployments
6. HPC Requires Higher and Higher Performance
• Faster performance means
  • More simulations
  • Faster analysis
  • Broader analysis
7. HPC Requires Higher and Higher Performance
• HPC Performance Solutions
  • Intelligent Software with Data Locality: put the data next to the CPU
  • Shared All-Flash Storage
  • Scale-Out Object-Based All-Flash Arrays
  • Parallel File Systems with Back End Flash
8. HPC Requires Higher and Higher Capacity
• Data Preservation
  • Re-run old simulations
  • Use old data to provide deeper analysis/simulation
• Larger Working Dataset
  • Larger per file/object
  • Greater quantity of files/objects
• HPC Capacity Solutions
  • Hard Disk Tier within Primary Storage
  • High-Capacity, Highly Dense Flash System
  • Tape
9. Do Hard Disks Still Make Sense for HPC?
• Hard Disks are and will remain more Cost-Effective than Flash
• Hard Disks are actually increasing in reliability
10. Why Not Use The Cloud For This?
• Recurring Cost of the Cloud gets Expensive
• Access Charges are unpredictable
• Access Times are unpredictable
• And there are the ongoing Security Concerns
11. Object Store To HPC Integration
• Seamless integration between High-Performance Flash and Object
  • NAS becomes a Cache to Object
  • Native HTTP/S3 Support
  • Direct ingest via native NFS Support
  • Tiering of older Data to Object
• All-Flash Object Storage
  • Flash-based Object Storage is becoming more practical
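The "NAS becomes a cache to object" design implies a tiering policy: files that go cold on the flash tier are migrated down to the object store. As a minimal Python sketch of such an age-based policy (the 30-day cutoff, file paths, and catalog shape are hypothetical illustrations, not Swarm's actual policy engine):

```python
import time

# Assumed cutoff: files untouched for 30 days are candidates for the
# object tier. A real system would make this configurable per policy.
TIER_AFTER_SECONDS = 30 * 24 * 3600

def select_for_tiering(files, now=None):
    """Return paths whose last-access time is older than the cutoff.

    `files` maps path -> last-access timestamp (seconds since epoch).
    """
    now = time.time() if now is None else now
    return [path for path, last_access in files.items()
            if now - last_access > TIER_AFTER_SECONDS]

# Example with two cold files and one recently accessed file.
catalog = {"/nas/simA.dat": 0, "/nas/simB.dat": 100, "/nas/hot.dat": 9_000_000}
print(select_for_tiering(catalog, now=10_000_000))
# → ['/nas/simA.dat', '/nas/simB.dat']
```

The point of the sketch is the division of labor: the flash NAS keeps only the working set, while anything past the age threshold lands on the cheaper object tier yet remains reachable via HTTP/S3 or NFS.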
12. Caringo Swarm Brings Cloud Approach to HPC
Cut Storage Costs by Up to 75%
• Search & custom metadata for analysis
• Consolidate data
• Multi-protocol support for next-gen apps & workflows
• Secure multi-tenancy for collaboration
13. Swarm Manages Unpredictability in HPC Workflows
[Diagram: instruments & devices connect to Swarm via multi-protocol access]
1. Reduce primary storage footprint & costs and increase experimentation space
2. Services approach adapts to requirements, eliminating constant migration
3. Streamline access from analysis tier, local or on compute cluster
14. Faster Discovery by Reducing Complexity & Migrations
[Before/after diagram: BEFORE, instruments & devices feed an app server, HPC cluster, storage and a SAN; AFTER, instruments & devices feed the app server directly, eliminating LUNs & RAID and reducing data migrations]
✓ Reduce storage TCO up to 75%
✓ Reduce data migrations
✓ Secure access and collaboration
15. Scale-Out Storage with Parallel Access
● Each Storage Node adds CPU, Memory, Network
● Can support thousands of servers simultaneously
● Requests can be sent to any Storage Node
● Requests are routed to the correct Storage Nodes
● All healing operations performed in parallel
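The routing behavior described here (any node can accept a request, which is then routed to the node that owns the data) is commonly implemented with consistent hashing over object names. The sketch below shows the idea in Python; the node names and the use of SHA-256 are illustrative assumptions, not Swarm's documented placement algorithm.

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # hypothetical storage nodes

def owner(object_name, nodes=NODES):
    """Pick the node whose point on the hash ring is at or above the
    object's hash point, wrapping around at the top of the ring."""
    def h(s):
        # 64-bit hash point derived from SHA-256
        return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")
    ring = sorted((h(n), n) for n in nodes)
    point = h(object_name)
    for node_point, node in ring:
        if node_point >= point:
            return node
    return ring[0][1]  # wrap around the ring

# Any node receiving a request computes the same owner and forwards it.
print(owner("simulation-42.dat"))
```

Because every node computes the same deterministic mapping, no central metadata lookup is needed, and adding a node shifts only the objects whose hash points fall in that node's new ring segment, which is what lets each added node contribute its CPU, memory, and network without a coordination bottleneck.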
16. Thank you!
Storage Switzerland
http://www.storageswiss.com
StorageSwiss on Twitter:
http://twitter.com/storageswiss
StorageSwiss on YouTube:
http://www.youtube.com/user/storageswiss
Caringo
https://www.caringo.com
Caringo on Twitter:
https://twitter.com/CaringoStorage
Caringo on Facebook:
https://www.facebook.com/CaringoStorage
17. Performance vs. Cost
Solving The HPC Storage Tug-of-War
For complete Audio and Q&A please register for the On Demand Version at:
http://bit.ly/HPCTug