Server virtualization is being widely adopted throughout the industry. Server virtualization places new demands on the storage infrastructure that should be considered early in the design process. NetApp provides storage and data management solutions that uniquely enable effective server virtualization environments, and which further extend the benefits of server virtualization. In this presentation, we’ll review why NetApp is the best storage solution for virtualized server environments.
This Blueprint is designed to help customers who are using OST technology with Backup Exec’s Deduplication Option to improve back-end storage capabilities in a complex backup environment.
Relentless Information Growth
The data deduplication technology within Backup Exec 2014 breaks down streams of backup data into “blocks.” Each data block is identified as either unique or non-unique, and a tracking database is used to ensure that only a single copy of a data block is saved to storage by that Backup Exec server. For subsequent backups, the tracking database identifies which blocks have been protected and only stores the blocks that are new or unique. For example, if five different client systems are sending backup data to a Backup Exec server and a data block is found in backup streams from all five of those client systems, only a single copy of the data block is actually stored by the Backup Exec server. This process of reducing redundant data blocks that are saved to backup storage leads to significant reduction in storage space needed for backups.
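The process described above can be sketched in a few lines of Python. This is a minimal illustration, not Backup Exec’s implementation: the fixed block size, the SHA-256 fingerprinting, and the in-memory dict standing in for the tracking database are all assumptions made for the sketch.

```python
import hashlib

# A minimal sketch of block-level deduplication. The "tracking database"
# is a dict keyed by the SHA-256 fingerprint of each block; the real
# product's chunking and database are internal and may differ.
BLOCK_SIZE = 4096

def dedup_store(stream: bytes, store: dict) -> int:
    """Split a backup stream into fixed-size blocks and store only
    blocks not already seen. Returns the number of new blocks written."""
    new_blocks = 0
    for i in range(0, len(stream), BLOCK_SIZE):
        block = stream[i:i + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in store:      # unique block: store it
            store[fingerprint] = block
            new_blocks += 1
    return new_blocks

# Two clients back up overlapping data; the shared block is stored once.
store = {}
first = dedup_store(b"A" * 4096 + b"B" * 4096, store)   # both blocks are new
second = dedup_store(b"A" * 4096 + b"C" * 4096, store)  # only "C" block is new
```

Running this, the second backup stores only one new block even though it sent two, which is exactly the storage saving the tracking database provides.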
Data growth has forced greater investment in IT infrastructure, and data protection processes such as backup compound that growth by creating multiple copies of primary data for operational and disaster recovery. This has also made the backup infrastructure far more complex. Although disk-based systems inherently offer faster restores, they can also make backup environments more complex and difficult to manage, and many backup solutions struggle to manage advanced storage device capabilities such as data deduplication, replication, and the ability to write directly to tape.
Power of OpenStorage Technology (OST)
Symantec Backup Exec software and OpenStorage technology (OST) are designed to provide centrally managed, edge-to-core data protection that spans multiple sites, delivers disk-to-disk-to-tape (D2D2T) functionality, and automates data movement. The OpenStorage API, introduced in Backup Exec 2010, automates the movement of data between sites and storage tiers and acts as a single point of management and catalog for backup data, regardless of where it resides (remote office or corporate data center), what type of media it is stored on (disk or tape), or its age (recent backup or long-term archive), providing better control of advanced storage devices.
The OpenStorage initiative allows customers to better utilize advanced, disk-based storage solutions from qualified partners. It ensures tighter integration between the backup software and storage, and delivers greater efficiency and performance through easy-to-deploy, purpose-built appliances that do not have the limitations of tape emulation devices. Backup Exec enables faster backups to deduplication appliances via a third-party OST plug-in.
Veeam presentation for the NimbleStorage Connect User Group in Bristol and London, covering how the integration works and why it is important to use Veeam and Nimble together.
Symantec Corp. (Nasdaq: SYMC) today announced it will deliver a new approach for modernizing backup and recovery, a process that has become unnecessarily complicated and expensive as organizations’ data stores grow exponentially. Compared to traditional backup, Symantec’s approach enables 100 times faster backup, eases management and simplifies recovery if a disaster occurs, helping customers realize significant cost savings while better protecting their business information.
Implementing a Disaster Recovery Solution using VMware Site Recovery Manager... - Paula Koziol
IBM Spectrum Virtualize delivers business continuity capabilities using a stretched cluster configuration together with VMware Site Recovery Manager (SRM). The result is an end-to-end disaster recovery solution for organizations of all sizes. Join this session to understand how IBM Spectrum Virtualize, including offerings like IBM SAN Volume Controller (SVC) and IBM Storwize Family, integrates with VMware SRM to automate and optimize disaster recovery operations. Everyone who works in mission critical environments understands the need for high availability and effective solutions for planned and unplanned outages. Organizations demand disaster recovery operations that are fully automated and can be executed in a repeatable manner, so that they are always prepared for disaster situations. This IBM-VMware solution offers SMB and enterprise customers the ability to survive a wide range of failures and enables seamless migration of applications across company sites for various planned activities, enabling zero-downtime application mobility.
Symantec continues to deliver on its information management strategy to enable organizations to protect their information completely, deduplicate everywhere to eliminate redundant data, delete confidently and discover efficiently with Enterprise Vault 9.0, Enterprise Vault Discovery Collector, NetBackup 5000 and the NetBackup Cloud Storage for Nirvanix.
Accelerate Your Signature Banking Applications with IBM Storage Offerings - Paula Koziol
Signature users can cut application run and response times by as much as 50% by applying the latest IBM Storage offerings. Hear about an example Signature user’s experience and benefits with IBM Flash, as well as IBM’s direction with the IBM i processor, and get answers to questions you may have about upgrading your IT infrastructure. Current data growth, analytics, and real-time access needs have changed the storage landscape for our clients, particularly in banking. IBM’s multi-billion dollar investments in storage are making a significant impact on the speed, efficiency, and management of these needs. Offerings such as all-flash systems and software-defined storage have become especially attractive to our banking clients, who are both accelerating existing applications, such as core banking, and creating new applications demanding real-time access, such as cybersecurity and cognitive in payments. Learn how others in the Financial Services industry are addressing core banking, payments, and risk & compliance applications using IBM Storage offerings. In addition to Signature, other core banking examples applying flash storage within Fiserv include Premier, Precision, and XP2. Many of the same business benefits experienced within the banking industry could apply to you and your clients. Learn how you can easily implement these proven capabilities with your Signature application now.
Symantec delivers on its deduplication everywhere strategy - designed to reduce data everywhere, reduce complexity, and reduce data infrastructure – by announcing Backup Exec 2010 and NetBackup 7.0.
These products both integrate deduplication technology closer to the information source at the client and at the media server to help organizations achieve significant storage and cost savings and simplify their backup and recovery operations through a unified platform.
In addition to deduplication, NetBackup 7 helps enterprise-level organizations protect, store and recover information and adds improved virtual machine protection and faster disaster recovery. Backup Exec 2010 also adds integrated archiving and improved virtual machine protection, helping mid-sized businesses protect more data and utilize less storage - overall saving them time and money.
Five things virtualization has changed in your DR plan - Josh Mazgelis
Are you still rolling with the changes? Virtualization has made a huge impact on the way we deploy our computer workloads, and with that it has also changed the ways in which we protect them. The business continuity plans in place for IT even just five years ago look very different than what many companies have in place today. Keeping on top of these changes will help you understand your recovery capabilities, and your limitations as well. Join us with our friends at Neverfail and make sure you're keeping your IT business continuity plans spicy and fresh!
Building vSphere Perf Monitoring Tools - Pablo Roesch
Balaji and Ravi present on how to build vSphere monitoring tools using the vSphere APIs - a must-view for anyone managing a large, complex environment. For vSphere SDKs, APIs, blogs, forums, and sample code, visit http://developer.vmware.com
IBM Storage is ideally suited for SAP HANA workloads. Learn more about our certified offerings for SAP HANA TDI, and all about our flash storage technologies and systems. Our Storage is built to power the future of IT!
Better Backup For All Symantec Appliances NetBackup 5220 Backup Exec 3600 May... - Symantec
Symantec’s latest backup appliances, NetBackup 5220 and Backup Exec 3600, now include the latest NetBackup 7.5 and Backup Exec 2012 software, announced earlier this year. The new appliances deliver on Symantec’s Better Backup for All initiative to address what Gartner has called “The Broken State of Backup.”
VMworld 2013: Implementing a Holistic BC/DR Strategy with VMware - Part Two (VMworld)
VMworld 2013
Jeff Hunter, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Ken Werneburg, VMware
Need For Speed: Using Flash Storage to optimise performance and reduce costs... - NetAppUK
Flash Storage technologies are opening up a wealth of new opportunities for improving the optimisation of applications, data and storage, as well as reducing costs. In this session, Peter Mason, NetApp Consulting Systems Engineer, shares his experiences and discusses the use and impact of different Flash technologies.
Vidhyalive offers NetApp training (instructor-led live online training) with real-world scenarios. Learn how to configure NetApp technologies. Please visit here for further details: http://www.vidhyalive.com/product/netapp-training/
A storage analysis based on a VMware P2V project.
This analysis looks at the necessary storage infrastructure required to support a 500 VM environment on EMC LUNs and NetApp NFS volumes.
IT Brand Pulse survey data presented at Flash Memory Summit covering 2016 flash product brand leaders, IT pro perceptions of flash vendor marketing, satisfaction with flash products in production, and awareness of new technologies.
MT48: A Flash into the future of storage… Flash meets Persistent Memory: The... - Dell EMC World
Several key technology trends are redefining the boundaries of the traditional storage infrastructure stack: In a rapidly changing world of system interconnects, emerging memory media, and storage semantics, Server Designers and Storage Architects are engaging and collaborating like never before to exploit breakthrough technology capabilities.
With the backdrop of Big Data volume, Cloud Data ubiquity and IoT Data velocity, Application Developers are entering the Post-POSIX world of real-time, high-frequency, low latency data management frameworks.
This session will address key technology trends in Storage, Networking, and Compute, as they define the parameters of a Memory Centric Architecture (MCA) and the Next Generation Data Center.
Microsoft Azure has become the default option for anyone migrating from on-site data centres to the cloud.
It’s an obvious choice.
Most IT departments are familiar and skilled with the Microsoft toolset, so for back-office systems it just makes sense.
The question is, where to start? Setting up disaster recovery to Azure is a logical, low-risk first step.
Benefit from practical tips on how to achieve a better TCO by using Azure for DRaaS:
Replicating both VMware and Hyper-V environments
Setting up the ZCA (Zerto Cloud Appliance) in Azure
Connecting the ZVM (Zerto Virtual Manager) to your vCenter
Using Blob storage for replica disks and journals
Failing over into Azure and failing back
PERNIXDATA OPTIMIZES STORAGE FOR VIRTUALIZED ENVIRONMENTS. By decoupling strategic storage performance and management functions from the underlying storage hardware, our software maximizes VM performance, delivers predictable scale-out growth, and minimizes storage costs.
The United States National Institute of Standards and Technology (NIST) has p... - Michael Hudak
The NIST Definition of Cloud Computing
http://www.championcloudservices.com/Blog/bid/71922/8-18-2011-FINALLY-an-Agreement-on-Defining-what-the-CLOUD-is
Accelerating Converged Infrastructure With FlexPod - Michael Hudak
IDC investigates the transformation of IT delivery in the datacenter and the emerging role of pretested solutions that focus on unified, shared infrastructure. The paper focuses on the advent of converged infrastructure solutions that simplify the deployment and management of virtualized environments spanning server, storage, and networking components. The paper then takes a closer look at the FlexPod datacenter solution offered by NetApp and Cisco, with a focus on the VMware vSphere design configuration. IDC examines the importance of professional services to delivering the solution and the key role of a collaborative support model aimed at streamlining the support process for customers and channel partners.
This is a new list that CRN publishes, similar to the VAR500 and Fast Growth 100. The following vendor certifications are part of the CRN Tech Elite: EMC, IBM, Cisco, VMware, Citrix, Dell, Symantec, and NetApp.
Champion Solutions Group Atlanta Solutions Lab Partners - Michael Hudak
Atlanta Solutions Lab - a state-of-the-art facility that enables us to replicate customers' environments, validate storage solutions, and solve customers' business challenges. Our team of experts provides hands-on training and can help customers explore cutting-edge technologies such as SAN and NAS environments, on-site or remotely. With this Solutions Lab operating and available on demand, Champion is ready to help you solve your next big IT challenge, regardless of your location.
They use our centers for everything from intriguing customers with a new approach to educating them with in-depth demonstrations of complete and proven solutions.
Risk Enterprise Management Limited (REM) provides claims, managed care, and risk management solutions to the property and casualty insurance industry. The company employs 400 professionals and delivers services globally to Fortune 1000 companies, program managers, captive managers, insurers, reinsurers, brokers, and agents.
NetApp Syncsort Integrated Backup Solution Sheet - Michael Hudak
Q: NetApp has core technology for block-level data protection. What does Syncsort add?
A: Syncsort leverages NetApp core technology while adding the following:
• Heterogeneous application support for Exchange, Oracle, and SQL
• Deeper integration with VMware for advanced, automated recovery scenarios
• Catalog search and restore across disk-based backup
• A catalog that spans both disk and tape
• Recovery from SnapMirror DR destinations
• Automated Bare Metal Recovery
Desktop virtualization is a hot topic throughout the virtualization industry. Organizations view desktop virtualization as a way to control costs and use limited resources to manage large-scale desktop infrastructures while increasing security and deployment efficiency. NetApp, VMware, Cisco, Fujitsu, and Wyse joined forces to create an architectural design for a 50,000-seat VMware® View™ architecture. This project demonstrates the most reliable methods for deploying a scalable and easily expandable environment from 5,000 to 50,000 seats in an efficient manner.
Storage Efficiency Customer Success Stories Sept 2010 PowerPoint - Michael Hudak
See over 100 companies from Government, High Tech, Manufacturing, Financial, Retail, Health Care, Research and Technical, Information and Media, Telco and Service Providers, Transport, Education, Legal, Energy, and Entertainment, the storage challenges they faced, and how NetApp helped resolve them.
Storage for Microsoft® Windows Environments - Michael Hudak
This document explores some of the common challenges Windows® architects and administrators face in managing storage for Microsoft® environments with workloads such as:
• Microsoft Exchange Server
• Microsoft Office SharePoint® Server
• Microsoft SQL Server
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
My and Rik Marselis's slides from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also held a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silos continue to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs
Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Climate Impact of Software Testing at Nordic Testing Days
Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
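The claim that test techniques can minimize the number of tests can be made concrete with pairwise testing. The following is a minimal sketch in Python (the parameter names and values are invented for illustration): a greedy pass covers every parameter-value pair with far fewer cases than the exhaustive cross product, which in turn means fewer test-environment runs and a smaller footprint.

```python
from itertools import combinations, product

# Hypothetical test parameters; 3 x 3 x 3 = 27 exhaustive combinations.
params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "fi", "de"],
}

names = list(params)
all_cases = list(product(*params.values()))

def pairs_of(case):
    """All parameter-value pairs exercised by one test case."""
    return {((names[i], case[i]), (names[j], case[j]))
            for i, j in combinations(range(len(names)), 2)}

# Greedy covering: repeatedly pick the case covering the most uncovered pairs.
uncovered = set().union(*(pairs_of(c) for c in all_cases))
suite = []
while uncovered:
    best = max(all_cases, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"exhaustive: {len(all_cases)} tests, pairwise: {len(suite)} tests")
```

The greedy heuristic is not optimal, but it reliably covers all pairs with a fraction of the exhaustive suite.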
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
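One pattern for prompting an AI to enrich plain text with markup can be sketched as follows. This is a hypothetical sketch, not code from the presentation: `ask_model` is a stand-in stub for a real language-model call, and the element names are invented. The key point it illustrates is validating the model's output as well-formed XML before trusting it.

```python
import xml.etree.ElementTree as ET

def ask_model(prompt: str) -> str:
    # Stand-in for a real AI service call; a real integration would send
    # `prompt` to a language model and return its reply.
    return ("<article><title>Sample</title>"
            "<para>Plain text, now marked up.</para></article>")

def enrich_with_markup(plain_text: str) -> str:
    prompt = (
        "Wrap the following text in well-formed XML using <article>, "
        "<title>, and <para> elements. Return only the XML.\n\n" + plain_text
    )
    candidate = ask_model(prompt)
    # Never trust generated markup blindly: parse it, or raise ParseError.
    ET.fromstring(candidate)
    return candidate

xml_out = enrich_with_markup("Sample. Plain text, now marked up.")
```

The same prompt-then-validate loop extends naturally to generated XSLT, XSD, or Schematron, with schema validation replacing the well-formedness check.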
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Mind map of terminologies used in the context of Generative AI
The Best Storage for VMware Environments Customer Presentation Jul201
Extend the Benefits of
VMware vSphere with
NetApp Storage
Michael Hudak
mhudak@championsg.com
NetApp Sales Specialist
Champion Solutions Group
(800) 771-7000 x344
VMware Thin Provisioning
- Thin provision your datastores; the virtual machine sees the full logical disk size at all times.
- Benefits: eliminates the need to over-provision virtual disks; the VMware administrator has visibility and control over his storage consumption.
NetApp Thin Provisioning
- Thin provisioning on the array, with full reporting and alerting on allocation and consumption.
- Benefits: buy less storage up front; the storage administrator has visibility and control over her storage consumption; capture management benefits overall.
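The core idea of thin provisioning, that the guest always sees the full logical disk size while the array consumes space only for blocks actually written, can be illustrated with a toy model. This is a conceptual sketch with invented names, not NetApp or VMware code:

```python
class ThinVolume:
    """Toy thin-provisioned volume: space is consumed only on write."""

    BLOCK_MB = 1  # simplistic fixed block size for illustration

    def __init__(self, logical_size_mb: int):
        self.logical_size_mb = logical_size_mb  # what the VM always sees
        self._written_blocks = set()            # what the array actually stores

    def write(self, block_index: int) -> None:
        if not 0 <= block_index < self.logical_size_mb // self.BLOCK_MB:
            raise IndexError("write beyond logical size")
        self._written_blocks.add(block_index)

    @property
    def consumed_mb(self) -> int:
        return len(self._written_blocks) * self.BLOCK_MB

vol = ThinVolume(logical_size_mb=1024)  # guest sees a 1 GB disk
for block in range(100):                # but only 100 MB has been written
    vol.write(block)
```

Here `vol.logical_size_mb` stays 1024 while `vol.consumed_mb` is only 100, which is why reporting and alerting on actual consumption matter: the array can be oversubscribed relative to physical capacity.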
Goal of slide: Position NetApp as the superior storage choice for vSphere environments.
- VMware and DAS is good; VMware connected to intelligent arrays is better; VMware with NetApp is best.
- NetApp has offered thin provisioning capabilities inherent in its arrays for many years. VMware, recognizing the value of array-based data management, is working with NetApp and other storage partners to integrate intelligent array capabilities into the ESX framework.
- ESX-based thin provisioning works at the level of a datastore and may therefore be best suited to smaller deployments. Larger deployments that need to scale can use array-level functionality to derive efficiencies across datastores at the resource-pool level.
- NetApp is engaged in making available vCenter-based data management tools that can simplify things further.
- Deduplication across all storage tiers provides additional storage efficiency.
- Note that VMware thin provisioning and NetApp thin provisioning should not be used together; vStorage helps identify whether thin provisioning is already available on the array so that there is no conflict.
NetApp enhances VMware value:
- Ensures optimal storage efficiency beyond thin provisioning; supports FT, individual desktops, etc.
- Block-based VM backups.
vSphere 4.0 is the OS and foundation for cloud computing. VMware enables storage functionality for small environments (those without intelligent storage):
- Thin-provisioned virtual disks reduce initial storage consumption but lack support for all features.
- VMware Data Recovery provides backup to disk and simple recovery at a file or image level for small environments.
- Automating multi-pathing assignments via NMP removes laborious tasks in multi-node clusters.
Joint solution that maintains application performance during long-distance VM migrations.
- Combines the Cisco IP network, NetApp FAS with FlexCache, and VMware VMotion/Storage VMotion.
- Leverages FlexCache to create local copies of remote data.
- Ideal for application migrations, technology updates, and hardware maintenance.
NetApp advantages: simplest (IP, NFS-based), broadest platform support, most storage-efficient, jointly validated.
Goal of slide: Illustrate how NetApp has unique value in provisioning capabilities, matching the provisioning capabilities that VMware brings to the servers.
Key points:
- Most businesses highly value an IT infrastructure that can respond quickly to change and quickly take advantage of new applications and capabilities, so the ability to move rapidly through test, development, and deployment activities is very important.
- Traditional storage systems struggle to support rapid rollouts. Test, dev, and deployment activities typically revolve around generating physical copies of data or VMs, which takes both time and disk space.
- NetApp uniquely enables instant virtual copies, which do not require any data to be copied and are much faster than alternative approaches. The FAS3070 VeriTest study indicated that it takes 7 seconds to clone a LUN on NetApp versus 27 minutes on an EMC CLARiiON (check).
- With the ability to create instant virtual copies, test, dev, and deployment activities for virtualized servers can be substantially accelerated.
Notes: The comparison of cloning times is from a VeriTest study of the NetApp FAS3070 versus the EMC CX3-80. See the full report at http://www.lionbridge.com/competitive_analysis/reports/netapp/NetApp_FAS3070_vs_EMC_CX3-80_Performance_and_Usability.pdf
Goal of slide: NetApp PS can help you transition to a virtualized infrastructure, no matter what stage you are at, from test/dev to production.
- There are various ways to roll out server virtualization and NetApp storage; what is right for you depends on your current situation and business objectives.
- In general, a good approach is to deploy first in test/dev, then infrastructure apps, and then key production applications that require DR. With this approach, incremental risk is minimized at each step.
- At each step of the way, it is important to architect server and storage solutions together, and to leverage professional services to help ensure your success.
- These are general guidelines; your account team can work with you to develop a rollout plan that is right for you.