This document discusses the challenges of backing up virtualized servers and provides an overview of ExaGrid and Veeam Backup & Replication as an integrated solution. 51% of IT professionals report that their virtual server backups do not fully meet recovery goals. ExaGrid addresses this with disk-based backup using data deduplication for faster backups and restores in minutes along with a scalable architecture. When used with Veeam, customers can leverage their virtual environments for reliable backup and recovery. The presentation provides examples of how together ExaGrid and Veeam have helped customers improve their backup and disaster recovery.
This session will explore what happens after server consolidation: Workload profiles change, usage patterns change, and resource utilization goes up and down. During this session, we will discuss how the storage footprint changes, and what used to be ideally optimized isn't anymore.
You will learn how to quantify the gap between resource allocation and actual usage, how to recover and redeploy excess capacity, and how to find common virtual machine configuration mistakes that can degrade overall performance.
Symantec Corp. (Nasdaq: SYMC) today announced it will deliver a new approach for modernizing backup and recovery, a process that has become unnecessarily complicated and expensive as organizations’ data stores grow exponentially. Compared to traditional backup, Symantec’s approach enables 100 times faster backup, eases management and simplifies recovery if a disaster occurs, helping customers realize significant cost savings while better protecting their business information.
Better Backup For All: Symantec Appliances NetBackup 5220 and Backup Exec 3600, May... (Symantec)
Symantec's latest backup appliances, NetBackup 5220 and Backup Exec 3600, now include the latest NetBackup 7.5 and Backup Exec 2012 software, which Symantec announced earlier this year. The new appliances deliver on Symantec's Better Backup for All initiative to address what Gartner has called "The Broken State of Backup."
Windows Server 2012 introduces the use of Storage Pools, which let you place USB, external, and internal hard drives together in a single pool. From that pool you can then create as many virtual disks as you need; these are in fact VHD files, like the ones Hyper-V already uses. Server 2012 supports RAID levels 0, 1, and 5. If you want flexibility and file redundancy without having to buy an expensive SAN, this feature is for you!
Symantec delivers on its "deduplication everywhere" strategy – designed to reduce data everywhere, reduce complexity, and reduce data infrastructure – by announcing Backup Exec 2010 and NetBackup 7.0.
These products both integrate deduplication technology closer to the information source at the client and at the media server to help organizations achieve significant storage and cost savings and simplify their backup and recovery operations through a unified platform.
In addition to deduplication, NetBackup 7 helps enterprise-level organizations protect, store and recover information and adds improved virtual machine protection and faster disaster recovery. Backup Exec 2010 also adds integrated archiving and improved virtual machine protection, helping mid-sized businesses protect more data and utilize less storage - overall saving them time and money.
Securing Your Endpoints Using Novell ZENworks Endpoint Security Management (Novell)
Endpoint security is one of the greatest concerns on the minds of senior management today. Protecting your data and controlling how systems access resources is of the utmost importance. You must take actions to protect your infrastructure while ensuring your employees can continue to perform their jobs effectively and efficiently. Come to this session to learn how you can leverage the power of Novell ZENworks Endpoint Security Management across your enterprise to achieve this delicate balance—so you and the rest of your organization can sleep at night.
This Blueprint is designed to help customers who are using OST technology with Backup Exec's Deduplication Option to improve back-end storage capabilities within a complex backup environment.
Relentless Information Growth
The data deduplication technology within Backup Exec 2014 breaks down streams of backup data into “blocks.” Each data block is identified as either unique or non-unique, and a tracking database is used to ensure that only a single copy of a data block is saved to storage by that Backup Exec server. For subsequent backups, the tracking database identifies which blocks have been protected and only stores the blocks that are new or unique. For example, if five different client systems are sending backup data to a Backup Exec server and a data block is found in backup streams from all five of those client systems, only a single copy of the data block is actually stored by the Backup Exec server. This process of reducing redundant data blocks that are saved to backup storage leads to significant reduction in storage space needed for backups.
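The block-tracking process described above can be sketched in a few lines. This is a minimal illustration, not Backup Exec's actual implementation: the tiny block size, SHA-256 hashing, and in-memory dictionary are all assumptions chosen for readability.

```python
import hashlib

BLOCK_SIZE = 4  # tiny for illustration; real products use KB-sized blocks


def dedup_store(stream: bytes, store: dict) -> list:
    """Split a backup stream into fixed-size blocks, save each unique
    block once (keyed by its hash), and return the list of block keys
    needed to reconstruct the stream."""
    recipe = []
    for i in range(0, len(stream), BLOCK_SIZE):
        block = stream[i:i + BLOCK_SIZE]
        key = hashlib.sha256(block).hexdigest()
        if key not in store:       # unique block: store it
            store[key] = block
        recipe.append(key)         # duplicate block: only reference it
    return recipe


def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original stream from its block references."""
    return b"".join(store[key] for key in recipe)


store = {}
r1 = dedup_store(b"AAAABBBBAAAA", store)  # "AAAA" appears twice
r2 = dedup_store(b"AAAACCCC", store)      # "AAAA" was already stored
print(len(store))                          # 3 unique blocks, not 5
print(restore(r1, store) == b"AAAABBBBAAAA")
```

Even in this toy run, five logical blocks across two "backups" collapse to three stored blocks, which is exactly the storage reduction the paragraph describes.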
Data growth has driven greater investments in IT infrastructure. Data protection processes such as backup compound that growth by creating multiple copies of primary data for operational and disaster recovery, which also makes the backup infrastructure far more complex. While disk-based systems inherently offer faster restores, they can make backup environments harder to manage, and many backup solutions struggle to handle advanced storage device capabilities such as data deduplication, replication, and the ability to write directly to tape.
Power of OpenStorage Technology (OST)
Symantec Backup Exec software and OpenStorage technology (OST) are designed to provide centrally managed, edge-to-core data protection that spans multiple sites, delivers disk-to-disk-to-tape (D2D2T) functionality, and automates data movement. The OpenStorage API introduced in Backup Exec 2010 automates the movement of data between sites and storage tiers, and acts as a single point of management and catalog for backup data regardless of where it resides (remote office or corporate data center), what type of media it is stored on (disk or tape), or its age (recent backup or long-term archive), providing better control of advanced storage devices.
The OpenStorage initiative lets customers make better use of advanced, disk-based storage solutions from qualified partners. It ensures tighter integration between the backup software and the storage, and delivers greater efficiency and performance through easy-to-deploy, purpose-built appliances free of the limitations of tape-emulation devices, enabling faster backups to deduplication appliances via a third-party OST plug-in for Backup Exec.
Protecting Data in an Era of Content Creation – Presented by Softchoice + EMC (Softchoice Corporation)
This presentation was delivered during the February and March Discovery Series events titled “Protecting Data in an Era of Content Creation” sponsored by EMC.
Storage budgets are only growing 3-4% per year, but data is growing 30-40% per year – so in five years you will have to manage roughly five times as much data, with roughly the same budget. This data is also being accessed by more people, from more places and in new ways. The end result is that some of the old-standby approaches to protecting data are no longer cost effective or manageable. This presentation looks at the strategies IT can employ to provide adequate levels of protection and availability in a very different data management environment.
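The "roughly five times" figure is simple compound growth; a quick check shows it corresponds to the upper end of the 30-40% range, while the budget barely moves:

```python
# Compound growth over five years: data at 30-40%/yr vs. budget at 3-4%/yr.
YEARS = 5

for rate in (0.30, 0.40):
    print(f"data growing {rate:.0%}/yr -> x{(1 + rate) ** YEARS:.1f} in {YEARS} years")
for rate in (0.03, 0.04):
    print(f"budget growing {rate:.0%}/yr -> x{(1 + rate) ** YEARS:.2f} in {YEARS} years")
```

At 40% annual growth the data multiplies about 5.4x in five years (about 3.7x at 30%), while a 3-4% budget grows only to about 1.2x, which is the gap the presentation is built around.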
If you have any questions about the content or the event, please contact @scTechEvents.
The Cloud: A game changer to test, at scale and in production, SOA based web... (Fred Beringer)
Today in retail, financial services, media, telecommunications and a host of other industries, more and more business is transacted through consumer web sites and mobile applications. With new channels creating spikes in traffic, highly complex system architectures, and internet-savvy customers, websites and web applications must be tested at scale to maximize business results and avoid a catastrophic crash. However, whether due to time or cost or other reasons, upwards of 90 percent of web applications are not fully tested before launching. If testing is done, many times it's with a small percent of expected traffic, which is then extrapolated for an estimation of performance.
Cloud computing is changing the game for testing web applications. Cloud testing enables, for the first time, performance testing that complements the lab and accounts for the conditions in a production environment, such as traffic spikes, network latency, firewalls, and other factors. And it can be done far more affordably than traditional testing methods, as part of agile development cycles, and without an army of highly skilled performance engineers.
Presentation given in Rome for the International SOA conference - Moving SOA into the Cloud in Rome, May 2011
SOASTA's tens of thousands of tests in customer labs and production environments have uncovered issues ranging from code-level bugs to problems in third-party services. Testing early, often, and at real scale is the only way to be fully prepared.
[café techno] Présentation de Backup Exec 2012 (Groupe D.FI)
Presentation of the Backup Exec 2012 backup solution:
- Components
- The interface
- The deduplication option
- Major improvements in Backup Exec 2012
Lessons Learned: Novell Open Enterprise Server Upgrades Made Easy (Novell)
You've read the documentation, played in the lab, and now you're ready to jump in and upgrade your NetWare environment to Novell Open Enterprise Server 2 on Linux. Attend this session to glean a final few best practices and to learn how to make the most of the migration tools included in the product. You'll also learn about the various pitfalls encountered during real-world upgrades, as well as the solutions used to resolve them.
VMware PEX Boot Camp - Reaching the Clouds with NetApp Integrations with VMwa... (NetApp)
Enterprises are asking their architects and integrators to find the most robust, cost-effective, scalable, easy-to-manage infrastructure for their vCloud environments. NetApp integrations with VMware vCloud Director make efficiency, data agility and integrated data protection a native part of vCloud environments. This session will show how integrations with vCloud Director make provisioning and cloning, session orchestration and automation, resource creation and management easier and smarter on NetApp storage.
Integrating Apple Macs Using Novell Technologies (Novell)
Apple Macs continue to increase in popularity and make up an increasingly large percentage of enterprise desktops. In this session, we'll explore the various Novell products and technologies that can be used to integrate Macs into your environment. You'll leave with a clear understanding of the issues involved and the options available to support the Mac user community in a Novell environment. You'll also have a chance to discuss suggestions for improving on this support.
How Teams Work: Do's and Don'ts for Dealing with Resistance to Your Team Project (Mike Cardus)
When working on teams, you will deal with resistance. This checklist provides guidance on what to do when stakeholders resist your project, how to deal effectively with resistant behavior, and what not to do.
Explains how to develop an organization's human resources: how to organize training and how to build HR managers' skills and capabilities (Ugik Sugiharto, GBS).
A high-level look at considerations for training new employees. Compares new-employee orientation for geographically dispersed employees to the starfish model of business design.
There's a lot of buzz about the cloud, primarily around business value. But, what about the care and feeding of that private cloud? This presentation explores some of the critical storage processes to automate to feed that storage-thirsty cloud.
Flexibility In The Remote Branch Office: VMware Mini Forum Calgary (James Charter)
VMware Mini Forum Calgary Afternoon Keynote Presentation, February 18, 2010. Overview on how Virtualization Technologies can provide flexibility and additional value in the Remote Office / Branch Office (ROBO). Topics discussed: Centralized vs. Distributed Deployment Models, Backup, Data Replication, Disaster Recovery, vSphere features, Site Recovery Manager, Virtual Desktop, WAN Acceleration.
Virtualizing Latency Sensitive Workloads and vFabric GemFire (Carter Shanklin)
This presentation was made by Emad Benjamin of VMware Technical Marketing. Normally I wouldn't upload someone else's preso but I really insisted this get posted and he asked me to help him out.
This deck covers tips and best practices for virtualizing latency sensitive apps on vSphere in general, and takes a deep dive into virtualizing vFabric GemFire, which is a high-performance distributed and memory-optimized key/value store.
Best practices include how to configure the virtual machines and how to tune them appropriately to the hardware the application runs on.
Announcing Symantec & Microsoft's Azure Cloud Disaster Recovery as a Service ... (Symantec)
Symantec today announced it will extend its Storage Foundation High Availability for Windows and Veritas Volume Replicator disaster recovery (DR) software to the Windows Azure cloud platform. The solution will allow organizations of any size to recover business-critical applications and their associated data in Windows Azure in the event of a local failure or site disaster, expanding Symantec's existing business continuity solutions for Microsoft Corp. and providing on-premise-to-cloud disaster recovery as a service (DRaaS).
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides from Nordic Testing Days, 6 June 2024.
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize our carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale, and on demand; test techniques can be used to optimize or minimize the number of tests; and test automation can be used to speed up testing.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
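As a rough illustration of the last point, an automated policy check boils down to comparing an SBOM's package inventory against policy rules. The SBOM structure, package names, and deny-list below are hypothetical stand-ins; real pipelines use standard formats such as SPDX or CycloneDX and tooling to generate them from container images.

```python
# Hypothetical, minimal SBOM for a container image (illustrative only;
# not the SPDX or CycloneDX schema).
sbom = {
    "artifact": "example-api:1.4.2",
    "packages": [
        {"name": "openssl", "version": "1.1.1k"},
        {"name": "log4j-core", "version": "2.14.1"},
    ],
}

# Policy: a deny-list of known-bad (package, version) pairs (illustrative).
DENIED = {("log4j-core", "2.14.1")}


def policy_check(sbom: dict) -> list:
    """Return the packages in the SBOM that violate the deny-list policy."""
    return [
        pkg for pkg in sbom["packages"]
        if (pkg["name"], pkg["version"]) in DENIED
    ]


violations = policy_check(sbom)
print(violations)  # the flagged package(s), as policy evidence for an ATO
```

The design point is that the SBOM and the violation list are both machine-readable artifacts, so the same check that gates the pipeline can be handed to an Authorizing Official as evidence.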
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
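As a sketch of what "enriching plain text with appropriate XML markup" means in practice, the snippet below wraps plain text in a `<para>` element and marks up ISO dates with a `<date>` element. The regex rule is a hand-written stand-in for the markup decisions an AI model would infer, and the element names are illustrative.

```python
import re
import xml.etree.ElementTree as ET


def enrich(text: str) -> str:
    """Wrap plain text in minimal XML, marking ISO dates (YYYY-MM-DD)
    with a <date> element. The regex is a stand-in for AI inference."""
    para = ET.Element("para")
    para.text = ""
    last = None  # most recently added child element
    pos = 0
    for match in re.finditer(r"\b\d{4}-\d{2}-\d{2}\b", text):
        chunk = text[pos:match.start()]
        if last is None:
            para.text = chunk          # text before the first match
        else:
            last.tail = chunk          # text between matches
        last = ET.SubElement(para, "date")
        last.text = match.group()
        pos = match.end()
    if last is None:
        para.text = text               # no matches: plain paragraph
    else:
        last.tail = text[pos:]         # trailing text after last match
    return ET.tostring(para, encoding="unicode")


print(enrich("Released on 2024-06-06 after review."))
# -> <para>Released on <date>2024-06-06</date> after review.</para>
```

The same pattern generalizes: the interesting part of AI-assisted enrichment is replacing the hand-written rule with model output while keeping the surrounding XML construction deterministic and valid.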
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) involves many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has left gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how they work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
2. The Challenges
51% of IT pros say virtualized server backups do not meet, or only partially meet, business goals for restores and recovery
87% of CIOs say recovery times from large-scale disaster are growing with the number of business-critical servers in the enterprise
$43 million: estimated cost to Toronto Dominion Bank for losing two backup tapes in 2012
Sources: 2012 CIO Pulse Backup & Recovery Survey: IDG; 2011 Veeam Virtualization Data Protection Report
4. Agenda
Overcoming barriers to virtualization
● Backup and restoration concerns among top barriers to virtualization penetration
● More VMware administrators choosing disk backup systems over tape
5 Keys to Virtual Backup Excellence
ExaGrid overview
● Disk-based backup with deduplication
Veeam Backup & Replication
● Fully leverage your virtualized environment
Integrated solution
● Use Veeam Backup & Replication with ExaGrid’s disk backup for optimal results
Customer case studies
5. Challenges of Server Virtualization
● Monitor, manage backups on growing # of VMs
● Issues when managing multiple guest OS’s
● Lack ability to see down to individual VMs
● Hassles, limitations of tape backup
These challenges are driving your customers’ need for better tools to reliably and easily back up and restore virtual machines
6. The Backup and Recovery Problem
[Diagram: email, database, file, VMware, and application servers feeding a backup server, which writes to a tape library and ships tapes offsite]
7. The Backup and Recovery Problem
[Same diagram: servers → backup server → tape library → offsite tapes, annotated with pain points]
● Slow backups
● Long restores/recovery times
● Tape failures, management burden
● Slow, unreliable offsite DR
● Challenging virtual backup & recovery
8. Most Significant Backup, Recovery Challenges
(Percent rating each “extremely/very challenging”)
● Long backup windows - backups taking too long: 54%
● Growing business requirements for more reliable, efficient disaster recovery: 51%
● Long restore/recovery times: 48%
● Total cost of ownership (TCO) of backup solutions: 48%
● Movement to virtualized servers putting pressure on VM backup and recovery: 43%
● Obsolescence of backup systems as data grows: 43%
● Lack of smooth scalability as data grows, and associated costs: 43%
Source: CIO Pulse Survey (May 2012): 1,269 IT professionals
9. Time to Resume Operations After an Incident
Average of 7 hours
● Less than 1 hour: 16%
● 1-3 hours: 37%
● 4-10 hours: 29%
● 11-24 hours: 10%
● 1-3 days: 6%
● More than 3 days: 2%
10. Disk Backup with Deduplication. What if you had…
20:1 to 100:1 reduction in backup storage
50-90% faster backup times
<5 minutes to restore a VM or file
98% reduction in replication bandwidth
0 forklift upgrades or models obsoleted
Fast, efficient offsite disaster recovery
Reliability, elimination of backup failures
Investment protection as data grows
11. 5 Keys to Virtual Backup Excellence
1. Deduplicate data for cost savings
2. Shorten backup window permanently as data grows
3. Make most recent backup fastest to restore
4. Deploy scalable architecture for cost-effective growth
5. Eliminate tape and protect offsite data with disk
The only disk backup with deduplication that permanently protects your budget: up-front price comparable to tape, and protection over time by avoiding costly forklift upgrades
12. How ExaGrid & Veeam Fit in Your Environment
[Diagram: email, database, file, VMware, and application servers (on SAN, NAS, and DAS storage, plus remote offices over the MAN/WAN) back up over Gigabit/10 Gigabit Ethernet to a backup server and an ExaGrid appliance at the primary backup site; SQL dumps and RMAN/TAR jobs go directly to ExaGrid; an optional second ExaGrid at a remote or MAN/WAN DR site replaces the tape library and offsite tapes]
13. 1. Deduplicate data for cost savings
[Chart: capacity used vs. number of backup jobs (1-10). Conventional disk backup grows linearly toward 50 TB stored; with deduplication the same 50 TB of backup data is stored in 3.5 TB, the gap between the curves being the storage savings]
14. 2. Shorten backup window…
[Diagram: media server writing to tape (long backup window) vs. backing up directly to ExaGrid (shortest backup window)]
Post-process deduplication: 3-10X faster than tape
● Backup directly to disk
● Unique landing zone architecture
● Post-process zone-level data deduplication
● Shortest possible backup window
● Most reliable backups and restores
15. 2. Shorten backup window permanently as data grows
Front-End Server Architecture vs. “GRID” Architecture
[Diagram: a single front-end controller runs one deduplication engine, so ingest stays at X TB/hr as capacity grows from 10 TB to 60 TB and the backup window lengthens. In the GRID, each 10 TB of capacity adds a full server (CPU, memory, GigE, disk) with its own deduplication engine, so aggregate throughput scales from X TB/hr at 10 TB to 6X TB/hr at 60 TB and the backup window stays flat]
● Stable backup window, linear performance as data grows
● Capacity virtualized across nodes
● Deduplication shared across nodes
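The contrast between the two architectures reduces to simple arithmetic. A minimal sketch, assuming each node ingests at a fixed rate (the slide’s X TB/hr, set here to an illustrative 1 TB/hr) and that the grid adds one full server per 10 TB:

```python
def backup_window_hours(data_tb, nodes, per_node_tb_per_hr=1.0):
    """Hours to ingest a full backup at the aggregate throughput."""
    return data_tb / (nodes * per_node_tb_per_hr)

for data_tb in (10, 30, 60):
    single = backup_window_hours(data_tb, nodes=1)            # fixed front end
    grid = backup_window_hours(data_tb, nodes=data_tb // 10)  # one node per 10 TB
    print(f"{data_tb} TB: single controller {single:.0f} h, grid {grid:.0f} h")
```

With a single controller the window grows linearly with data (60 h at 60 TB in this toy model); in the grid, throughput grows with capacity, so the window holds constant.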
16. 3. Make most recent backup fastest to restore
[Diagram: media server backing up to an inline deduplication solution vs. ExaGrid post-processing, which retains the latest backup in complete form for fast restores alongside deduplicated data for long-term retention]
Latest backup stored in complete form
● Ready for instant restore—as fast as minutes
● Instant VM Recovery
● Supports SureBackup
Efficient restore of earlier versions
● Optimized recovery of deduplicated data
● Seamless to backup application
● Can restore from any historical version
17. Only ExaGrid Delivers Restores in Minutes
[Diagram: fast backup of five 10 TB backup jobs into ExaGrid]
Landing Zone
● High-speed cache: most recent backup kept in complete form
● Intact VMs for InstantRestore™ and Instant VM Recovery
Repository
● Most recent backup compressed 2:1 to 5 TB, plus tiny byte-level deltas (200 GB each) for history
● 50 TB backed up in 5.8 TB of repository space
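The repository sizing on the slide follows directly from its own numbers. A sketch, assuming the figures shown (2:1 compression on the most recent 10 TB full and roughly 200 GB of byte-level deltas per prior version):

```python
def repository_tb(full_tb, n_backups, compression=2.0, delta_tb=0.2):
    """Deduplicated repository size: one compressed full plus byte-level deltas."""
    return full_tb / compression + (n_backups - 1) * delta_tb

logical_tb = 10 * 5               # five retained 10 TB backups
stored_tb = repository_tb(10, 5)  # 5 TB + 4 x 0.2 TB
print(f"{logical_tb} TB of backups held in {stored_tb:.1f} TB")
```

That is how 50 TB of retention fits in about 5.8 TB: only the newest copy is stored whole, and every older version costs only its deltas.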
20. See It in Action: ExaGrid vs. Other Appliances
21. 4. Deploy scalable architecture for cost-effective data growth
Legacy Architecture vs. “GRID” Architecture
[Same diagram as slide 15: a single front-end controller holds throughput at X TB/hr as capacity grows from 10 TB to 60 TB, while the GRID adds a full server (CPU, memory, GigE, disk) with its own deduplication engine per 10 TB and scales from X TB/hr to 6X TB/hr]
22. Total System Costs Over Time (20TB+)
[Chart: total system costs ($) over years 1-3. A single-controller system requires a forklift upgrade; ExaGrid avoids it, for a 50% savings]
23. 5. Eliminate tape, protect offsite data with disk
● Optimizes most recent backups for “Instant DR”
● Avoids delays from restoring from tape
● Restores faster from full copy vs “rehydrating” data
[Diagram example: replicating a 10 TB backup offsite. The 5 TB last backup (least deduplicated) is kept in complete form at the DR site; each update sends only the 200 GB of changed data, since the other 5 TB - 200 GB is already there]
24. ExaGrid Appliance: Disk at the Cost of Tape
Easy to use, plug-and-play disk backup and dedupe appliances
□ Low-risk easy integration with existing processes
□ Shortest backup times (30-90% faster than tape)
□ Unique support for Instant VM Recovery and SureBackup
□ Most scalable architecture to support data growth
□ WAN-efficient elimination of offsite tape: “Instant DR”
□ Most comprehensive management and reporting
□ Built-in redundancy with simple automated operation
Systems: 1 TB, 2 TB, 3 TB, 4 TB, 5 TB, 7 TB, 10 TB, 13 TB
Scalable from 1-130 TB in a single GRID
26. ExaGrid and Veeam Solution
Core Components:
● Backup storage - ExaGrid disk backup appliance
● Management - Veeam Backup & Replication
What’s Unique:
● Restore a VM in seconds to minutes
● Confirm 100% restorability of critical apps
Key Benefits:
● Reliable backup and restoration
● Advanced deduplication and data replication for off-site protection
27. Veeam Backup & Replication
Version 6 now available!
Enterprise scalability
Advanced replication
Multi-hypervisor support
Numerous enhancements, including
1-Click File Restore
28. Breakthrough Technology
Designed from the ground up specifically for virtualization
Leverages the advantages of disk as a backup target
29. v6 features
Enterprise scalability
Advanced replication
Multi-hypervisor support
Numerous enhancements, including
1-Click File Restore
30. Enterprise scalability
Enhanced distributed architecture
● Before: Backup Server. Now: Backup Server + Proxy Server + Repository Server
● Significant improvement in replication to ESXi
Intelligent load balancing—within and across jobs
● Not just round robin—considers datastore connection, parallel tasks, etc.
● Eliminates backup storms and target saturation (problems reported by users of other backup tools)
31. v6 features
Enterprise scalability
Advanced replication
Multi-hypervisor support
Numerous enhancements, including
1-Click File Restore
32. Advanced replication
10x faster
● Leverages the enhanced distributed architecture
● Uses VADP’s Hot Add mode to perform writes
● Also speeds up restores (and unlike Direct SAN mode, provides fast restores even on VMware thin provisioned disks)
Improved failover
● Re-IP
● Re-home (for example, data center migrations)
Real failback
● 1-click and confirm—for one or several VMs
● Delta sync
● Re-IP
33. v6 features
Enterprise scalability
Advanced replication
Multi-hypervisor support
Numerous enhancements, including
1-Click File Restore
34. 2-in-1 backup and replication for Hyper-V
Comprehensive data protection
Unified approach
35. Built-in compression and deduplication
Manage large volume of data generated by image-based backups
Included at no extra charge
Also includes optional synthetic full backups to manage data volume
36. Unified data protection across hypervisors
Single license, install and console
● Single console for VMware and Hyper-V
Same data protection infrastructure for VMware and Hyper-V
● But still hypervisor-specific (e.g., volume- vs. VM-level snapshot)
37. v6 features
Enterprise scalability
Advanced replication
Multi-hypervisor support
Numerous enhancements, including
1-Click File Restore
38. 1-Click File Restore
Enhancement to existing IFLR
IFLR = file-level recovery from an image-level backup
Restore directly from search: 10:1 reduction in number of steps
Delegated, agentless, secure
39. 1-Click File Restore (continued)
[Screenshot: web-based interface: find the file, then right-click to restore one or several files]
40. 1-Click File Restore (continued)
Restore directly to original location
● Uses guest interaction APIs
● Does not require:
Direct network connection to VM
User permissions on host or VM
Agent in host or VM
Delegate file restores
● New “File Restore Operator” role
● Delegated operator does not need permissions on backup files, the host or the VM
● Secure: files restored directly to original location
● Can be restricted to certain file types (.docx, .xlsx, etc.)
42. Case Study – Hoffman Construction
Before:
● Environment: began move to virtualization with five VMware ESX hosts and 60 virtual machines (VMs); VM snapshots, backed up to tape, stored on SAN
● Backup window: took at least six hours to recover one Microsoft SQL Server database
After:
● Environment: ExaGrid appliance with Veeam Backup and Recovery
● Backup window: fast, reliable and verifiable VM backups; backups completed in less than half the time; restores VM from backup in minutes to minimize downtime and disruption
Results:
● Restored 100 percent of VMs almost instantly with no user disruption after a major SAN crash and loss of all data stored on VMs
“The integration of Veeam Backup & Replication and ExaGrid’s landing zone architecture is a winning combination for flexibility and scalability.”
Kelly Bott, Field Technician/Technical Specialist
43. Recent Customer Successes
University
● Situation: struggling with Backup Exec going to tape; saw presentation with ExaGrid & Veeam; downloaded Veeam software, saw how well it worked
● Solution: ExaGrid (10TB) and Veeam, eliminated tape hassles
County Government
● Situation: customer was backing up to 24-tape library; 90% of servers are virtualized; slow backups and unreliable DR
● Solution: ExaGrid and Veeam; now have faster, reliable backups and recovery, saving hours a week in backup maintenance
Law Firm
● Situation: forklift upgrade from EMC Data Domain; customer loved Veeam and didn’t want to change
● Solution: ExaGrid (13TB) and Veeam; saving valuable IT budget
Non-Profit
● Situation: customer moving from Backup Exec (with tape) to Veeam; ExaGrid chosen over EMC Data Domain due to scalability, performance, integration with Veeam and Backup Exec
● Solution: ExaGrid (3TB) and Veeam; faster, more reliable backup
44. ExaGrid/Veeam: Making Your Backups & DR Better
Fastest backups, with no exploding backup windows
• Post-process for fastest backups and shortest backup windows
• Only system that scales with full servers to eliminate backup window expansion
Instantaneous VM and file restores, more reliable recovery
• Restore a VM in as fast as minutes
• Confirm 100% restorability of critical apps
• Instant restores from last full copy on disk—fastest full system restores, fastest tape copy
• Fast, reliable WAN-efficient offsite disaster recovery
Lowest total costs for your organization
• Best price in the industry up-front
• Budget protected over time via modular expansion, no forklift upgrades or obsolescence
45. Next Steps
5-Minute Phone Call
• Schedule 5-minute call to discuss your current environment, backup/DR challenges and issues
• Determine if disk backup with deduplication or VM backup & replication software is a match for your needs
Free Backup & Recovery Assessment
• Schedule 20-30 minute phone call: review/discuss current environment, challenges and issues
• Complete site survey
• Get technology suggestions, pricing proposal to permanently eliminate backup headaches
Test-Drive Veeam
• Visit http://www.veeam.com/exagrid_and_veeam.html
• Learn more about how ExaGrid and Veeam take VM backup to a whole new level
• Register for your FREE 30-day trial of Veeam Backup & Replication
www.exagrid.com
Editor's Notes
20110524
Here are three numbers: 51, 87 and $17,391. What do they have to do with VM backup and recovery?
Monitor, manage backups on growing # of VMs. Issues when managing multiple guest OS’s. More data to store = each change means entire vmdk file is backed up. E.g.: 10 guest OS instances x 50GB = backing up 500GB of virtual images daily. Lack ability to see down to individual VMs: determine how backups are running and adjust if problems.
This is a typical backup environment today. Email/Exchange servers, database servers… backed up to disk fronting a tape library, or direct to tape. As your environment grows, you have more data to back up, AND more types of data to protect. Here’s a typical example of what has happened in many environments over the past 10 years. Slide Show mode for animation. You started backing up your applications like Exchange to tape, and (click to advance) you added databases to the backup workload, which required more tape to protect. (Click to advance.) You need to protect your NAS filers, which of course also required more tapes. (Click to advance.) And as you virtualize your environment, you need to back up the virtual servers, adding more tapes for that. Beyond that, maybe you also have governance rules that require that you keep copies of certain data types for years, so you have backups stored as archives on tape as well. And then you need to protect desktops and laptops as well. (Click to advance to explosion graphic.) As if all these tapes weren’t enough of a headache alone, you also have to pay to ship and store them offsite.
If you still have tape somewhere in your backup infrastructure, you’re probably dealing with one or more of these problems. Anyone dealing with at least one of these?
Can we cite the source for this data within the slide?
For each 1TB full, WAN bandwidth of 3 mbps is required (assuming a 2% byte change rate)
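That bandwidth figure can be sanity-checked with a short calculation. A sketch, assuming the stated 2% byte change rate and, as an illustrative assumption, a 12-hour replication window (the window behind the slide’s 3 Mbps figure isn’t stated):

```python
def replication_mbps(full_tb, change_rate=0.02, window_hours=12.0):
    """WAN bandwidth (Mbit/s) needed to replicate the changed bytes in the window."""
    changed_bits = full_tb * 1e12 * change_rate * 8
    return changed_bits / (window_hours * 3600.0) / 1e6

print(f"{replication_mbps(1.0):.1f} Mbps per 1 TB full")  # about 3.7 Mbps at 12 h
```

A longer window lowers the requirement proportionally (about 1.9 Mbps over 24 hours), which is the same order of magnitude as the 3 Mbps quoted.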
vPower is the underlying technology and Veeam’s vision for data protection in the virtual world. Made up of 5 patent pending technologies and includes the ability to run a VM directly from a backup file
Data protection infrastructure = backup servers, proxy servers, repository servers
Purpose: Optimize your environment to best meet operational, business goals.
Process: 30-60 minute review and discussion: current backup and recovery environment; data sources and composition of data; retention requirements; key challenges and issues.
Payoff and session output: best practices, strategies, technologies (ExaGrid or other vendors/partners) to cover current, future needs. Suggestions to help achieve operational benefits: elimination of failed data backups and recoveries; reduction in backup windows, scalability to support future data growth; improved operational efficiency and reliability; reduced risk of data loss; validated integrity of backup/recovery architecture.