PHD Virtual: Optimizing Backups for Any Storage (Mark McHenry)
Learn about the differences between virtual full, traditional full, and incremental backup modes, and which mode works best for each type of storage.
Acroknight, the Caribbean Data Backup Solution: presentation, October 2013 (Steven Williams)
Acroknight is an automated online backup service designed to be extremely easy to use, with extensive management and reporting features for technology resellers and business customers.
With support for PC and server backups, built-in support for common applications such as Outlook, Exchange Server, SQL Server, SharePoint, and MySQL, and the ability to interoperate across operating systems, Acroknight can effectively power your online backup services, however large or small.
Webinar: The Three Reasons Cloud Backup is Broken and How to Fix It (Storage Switzerland)
The cloud was supposed to be the answer to IT’s backup dilemma; instead, in some ways it has made the situation worse. In this on-demand webinar, Storage Switzerland and Microsoft Azure hold a roundtable discussion on what’s gone wrong with cloud backup and how to fix it. In this webinar we cover the three problems with cloud backup:
* Doesn’t eliminate the on-premises infrastructure; it replaces it with a new one.
* Doesn’t protect Hybrid IT. Solutions are often on-premises focused or cloud-focused. IT needs one integrated solution.
* Creates a problematic ROI calculation. Egress charges are often left out of the cost model, making it easy to understate the true cost of a cloud backup solution.
After a careful examination of the problem, the team will discuss ways to address these problems and create an infrastructure-less data protection environment. Attendees of the live webinar can ask the panel their specific cloud backup questions and get expert answers immediately.
Capacity - Ransomware - Protection - Three Windows File Server Upgrades to Avoid (Storage Switzerland)
For most enterprises, optimizing a Windows file server means buying a new, bigger server with more capacity and better network connections, or buying a dedicated Network Attached Storage (NAS) system. Then they feel they need to add new ransomware and data protection software. All of these "solutions" are expensive, involve a costly data migration, and don't really increase IT efficiency. In this webinar, experts from Storage Switzerland and Caringo teach you how data management can eliminate the need for additional Windows file servers, capacity, and protection.
SQLBits 2008 - SQL Server High Availability and Disaster Recovery Overview - ... (Charley Hanania)
Session from SQLBits 2008, covering:
Scopes of Protection in SQL Server 2005
SQL Server Backup Features and Technologies
SQL Server Disaster Recovery Features
SQL Server High Availability Features
SQL Server Data Distribution Features
Recoverability Scenarios Review
TechTarget Event - Storage Architectures for the Modern Data Center - Howard ... (NetApp)
Keynote Presentation: How Storage Function Follows Architecture
Presented by Howard Marks, Founder and Chief Scientist, Deep Storage, LLC
Storage buyers today are faced with a broader variety of choices than ever before. Unfortunately, the architecture of the storage system they select will forever determine how well that system adapts to changes in their data center. While flash does make almost every storage system faster, the system's scalability, flexibility and manageability are determined not by the media but by the system's architecture.
This session will examine how storage system architectures predetermine how systems behave in the real world. We'll see how common storage architectures affect performance, scalability, quality of service, snapshots and vVol support.
In this session, we’ll explore the various approaches to caching data in an OutSystems application—from the basic concepts of caching, to situations when you should or shouldn’t cache data.
We’ll also discuss how to use the built-in cache mechanism in OutSystems, including how it works, how to implement it, and some considerations for best practices.
For mobile apps and reactive web applications, we’ll cover caching data client-side using IndexedDB, as well as additional server-side caching resources.
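The built-in cache mechanism itself is OutSystems-specific, but the underlying idea the session covers is general. The hypothetical class below sketches it in Python: a read-through cache where each entry expires after a time-to-live, so stale data is eventually refreshed without hitting the backing store on every request.

```python
import time

class TtlCache:
    """Minimal read-through cache with a per-entry time-to-live (TTL)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry_timestamp)

    def get(self, key, loader):
        """Return the cached value if still fresh; otherwise call loader()
        (e.g. a database query or web-service call) and cache the result."""
        entry = self._store.get(key)
        now = time.time()
        if entry is not None and entry[1] > now:
            return entry[0]
        value = loader()
        self._store[key] = (value, now + self.ttl)
        return value

    def invalidate(self, key):
        """Drop an entry so the next get() reloads fresh data."""
        self._store.pop(key, None)
```

The trade-off the session discusses falls out of the TTL: a longer TTL saves more backend calls but serves staler data, which is why caching is wrong for data that must always be current.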
For over a decade, Network Attached Storage (NAS) was the go-to file storage device for organizations needing to store large amounts of unstructured data. But unstructured data is changing. While large-file use cases are still prevalent, small-file use cases are becoming more dominant. Workloads like artificial intelligence, analytics, and IoT are typically driven by millions, if not billions, of small files.
Object storage is often hailed as the heir apparent, but is it? Can file systems be redesigned to continue to support traditional NAS workloads while also supporting modern, small-file, high-velocity workloads? Join Storage Switzerland and Qumulo for our webinar, “NAS vs. Object — Can NAS Make a Comeback,” to learn the state of unstructured data storage and whether NAS file systems can provide a superior alternative to object storage.
Join us at the event to learn:
• Why traditional NAS solutions fall short
• Why object storage systems haven't replaced NAS
• How to bridge the gap by modernizing NAS file systems
• Live Q&A with file system and NAS experts
Multi-tenancy has been part of the OutSystems platform for a long time but has not often been used.
In this session, Tiago will first explain exactly what multi-tenancy is. Then we'll quickly leave the theory behind and dive into practice; there's no better way to learn than from real-life cases. You'll get insight into when to use, and perhaps when not to use, the built-in mechanism, along with tips and tricks to avoid pitfalls.
Centrally monitor and control legal cases and notices to improve efficiency and communication across the legal ecosystem. Visit: https://lexcomply.com/enterprise-litigation-management
Netbackup advantages features and benefits Netbackup classes may help hands o... (Vidhyalive)
NetBackup is an enterprise-level backup and recovery tool. It offers cross-platform support for a wide range of Windows, UNIX, and Linux operating systems. It is a Veritas backup product intended to offer a fast, trustworthy backup and recovery solution for environments ranging from terabytes to petabytes in size. The name NetBackup refers to either of two products:
• Veritas NetBackup DataCenter
• Veritas NetBackup BusinessServer
Please read the complete post here http://www.vidhyalive.com/netbackup-advantages-features-benefit/
Tips and Tricks from the Trenches for Migrating to a Virtual Private Cloud (Xantrion)
Christian Kelly, Director of Technology, will be sharing tips and tricks from the trenches for migrating to a virtual private cloud. His talk focuses on the potential gotchas in migrating your infrastructure to a private cloud and how to avoid them.
The following types of issues will be covered:
• IP addressing
• Application dependencies
• Bandwidth limitations due to latency
• The impact of block size in SAN based replication
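The latency bullet above has a simple arithmetic core: a single TCP stream can never exceed its window size divided by the round-trip time, no matter how much bandwidth is provisioned. A small sketch (the figures are illustrative, not from the talk):

```python
def max_stream_throughput_mbps(window_bytes, rtt_ms):
    """Ceiling on single-stream TCP throughput: window size / round-trip time.

    Each round trip can move at most one window of data, so throughput
    is capped at window_bytes * 8 bits per RTT.
    """
    bits_per_round_trip = window_bytes * 8
    round_trips_per_second = 1000.0 / rtt_ms
    return bits_per_round_trip * round_trips_per_second / 1_000_000

# A classic 64 KiB window over an 80 ms WAN round trip caps one stream
# near 6.5 Mbps regardless of the link's line rate, which is why
# migrations over high-latency links often need window scaling or
# parallel streams.
```

This is the "bandwidth limitations due to latency" gotcha: the pipe may be fat, but each stream only fills it one window per round trip.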
A look at the challenges and benefits of delivering unified content within the public sector.
Presentation originally given by Matt J (@mhj_work) at UK GovCamp 2011
Symantec Corp. (Nasdaq: SYMC) today announced it will deliver a new approach for modernizing backup and recovery, a process that has become unnecessarily complicated and expensive as organizations’ data stores grow exponentially. Compared to traditional backup, Symantec’s approach enables 100 times faster backup, eases management and simplifies recovery if a disaster occurs, helping customers realize significant cost savings while better protecting their business information.
PowerPoint presentation on backup and recovery.
A good presentation covering all topics.
For any other type of PPT or PDF to be created on demand, contact dhawalm8@gmail.com
mob. no. 7023419969
When planning for disaster recovery, it is essential to have a clearly defined set of objectives based on your business's needs. InTechnology's Product Director for Data & Cloud Services, Stefan Haase, provides tips for any business to consider when putting together its disaster recovery plan. http://www.intechnology.co.uk/resource-centre/webcast-disaster-recovery-planning.aspx
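Disaster recovery objectives are commonly expressed as a recovery point objective (RPO, how much data loss is tolerable) and a recovery time objective (RTO, how long recovery may take). A minimal Python sketch (the helper name is hypothetical) of checking the newest recovery point against an RPO:

```python
from datetime import datetime, timedelta

def meets_rpo(last_recovery_point, now, rpo):
    """True if the newest recovery point is recent enough to satisfy the RPO,
    i.e. a disaster occurring right now would lose no more data than the
    RPO allows."""
    return (now - last_recovery_point) <= rpo
```

The practical consequence: the backup interval must not exceed the RPO, since a failure just before the next scheduled run loses up to one full interval of data.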
PHD Virtual Image-based Backup for Citrix XenServer (Mark McHenry)
This presentation shares information about PHD Virtual's image-based backup for Citrix XenServer environments. This solution is a simple and cost-effective alternative for those still wrestling with agents and writing scripts to perform backups.
Five Things Virtualization Has Changed in Your DR Plan (Josh Mazgelis)
Are you still rolling with the changes? Virtualization has made a huge impact on the way we deploy our computer workloads, and with that it has also changed the ways in which we protect them. The business continuity plans in place for IT even just five years ago look very different than what many companies have in place today. Keeping on top of these changes will help you understand your recovery capabilities, and your limitations as well. Join us with our friends at Neverfail and make sure you're keeping your IT business continuity plans spicy and fresh!
How to Use Cloud Storage to Overcome The 3 Challenges to ACTIVE Data Archiving (Storage Switzerland)
Join Storage Switzerland and Panzura in this on-demand webinar as we cover why you should aggressively archive data, what the challenges to an aggressive strategy are, and, most importantly, how to use cloud storage to overcome them.
Module 13: Recovering Network Data and Servers
This module explains how to recover network data and servers. There are a variety of scenarios in which network data, or a server that provides network services, can be lost. Volume shadow copies can be used to restore previous versions of files when a file is accidentally deleted or modified on a computer running Windows Server 2008. Windows Server Backup can be used to back up and restore data files or an entire server.
Lessons
Recovering Network Data with Volume Shadow Copies
Recovering Network Data and Servers with Windows Server Backup
Lab : Recovering Network Data and Servers
Configuring Shadow Copies
Configuring a Scheduled Backup
After completing this module, students will be able to:
Describe how to configure and use volume shadow copies.
Describe how to configure and use Windows Server Backup.
Never go back to tape again! Learn about the advantages of Cloud Backup and the considerations in defining your strategy at both the diocese and church level.
Phase two of OpenAthens SP evolution, including OpenID Connect option (Eduserv)
David Orrell, System Architect and Phil Leahy, Service Relationship Manager, talk about Phase II of the OpenAthens Cloud Service Provider project, and also about how OpenAthens is being used as an identity provider service in the corporate sector.
Tim Lull, Vice President of Sales, and Gar Sydnor, Vice President of Discovery Innovation, showcase EBSCO and how this product benefits the identity and access management community.
Phil Leahy, Service Relationship Manager covers our commitment to the publishing community as part of our Publisher Manifesto. David Orrell, System Architect, runs through phase one of our new service provider product.
Neil Scully, Head of Development and Service Delivery, shares the AGILE SCRUM and SPRINT process used in our product development methodology and the benefits this brings.
Tracy Gardner from Simon Inger Consulting presents the results of their 12 month research project, which included a survey of how over 40,000 readers discover scholarly content. The findings are pertinent to publishers and information professionals alike across sectors.
Jon Bentley, Commercial Director, shares the vision for our products, explains our brand evolution and presents key milestones in the development of our identity and access management (IAM) solutions. He also highlights the range of applications that work with OpenAthens.
Mike Brooksbank, Executive Director of OpenAthens, runs through the schedule of the day, plus an overview of OpenAthens and Eduserv, our last FY year and the year ahead.
Eduserv's Marketing Manager, Alex Bacon, presented at the B2B Network about his experience of content marketing and how to deliver valuable and engaging content to your audiences whilst generating leads at the same time.
This presentation by Jonathan Watkins of Maplesoft and the University of Birmingham was given to the Eduserv Maths and Stats Software Focus Group in June 2016. Möbius is a comprehensive online courseware environment that focuses on science, technology, engineering, and mathematics (STEM). Students can explore important concepts using engaging, interactive applications, visualize problems and solutions, and test their understanding by answering questions that are graded instantly.
This presentation was given to the Eduserv Maths and Stats Software Focus Group in June 2016. It focuses on updates to NVivo 11 for Windows and Mac, the new QSR Certification Programme and how QSR and the academic community might work more closely together.
Nick Wallace, Government Analyst, Public Sector, Ovum
Momentum for the adoption of cloud services continues to grow in the public sector as services mature and agencies' experience in buying and using cloud services grows. As agencies steadily incorporate various cloud components into their environments, it is clear that public sector organisations are starting to realise the benefits of cloud. In fact, if one were creating a "greenfield" service, "in the cloud" would be the default approach. However, the reality is that most institutions are not in this position. Most have to manage a legacy environment comprising aging technology and duplicate, inefficient, and inconsistent business processes. Developing and implementing a staged migration to cloud will be pivotal in determining whether the "as-a-service" promise facilitates innovation or undermines organisational integrity.
Planning your cloud strategy: Adur and Worthing Councils (Eduserv)
Paul Brewer, Director for Digital & Resources at Adur & Worthing Council.
How do you assess your organisation's readiness to move to the cloud and adopt new platforms to drive business change? Paul Brewer from Adur and Worthing Councils will share how they evaluated whether cloud was right for them. The talk will cover how they evaluated the benefits, costs, and risks of moving to the cloud, and how they used this assessment to support and build their cloud strategy.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
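The core idea — flag seed bytes whose mutation never changes the program's observed behavior — can be illustrated with a toy. This is not DIAR's actual algorithm, just the intuition; the `behavior` function below stands in for coverage feedback from the target.

```python
import random

def uninteresting_bytes(seed, behavior, trials=8, rng=None):
    """Return indices of bytes whose mutation never changes observed behavior.

    `behavior` maps an input bytes object to some observable signature
    (a stand-in for coverage feedback). Bytes that never affect it are
    candidates for removal before a fuzzing campaign.
    """
    rng = rng or random.Random(0)
    baseline = behavior(seed)
    boring = []
    for i in range(len(seed)):
        changed = False
        for _ in range(trials):
            mutated = bytearray(seed)
            # Force a value different from the original byte.
            mutated[i] = (seed[i] + 1 + rng.randrange(255)) % 256
            if behavior(bytes(mutated)) != baseline:
                changed = True
                break
        if not changed:
            boring.append(i)
    return boring
```

Dropping the flagged bytes shrinks the mutation space the fuzzer must explore, which is the speedup DIAR targets on real seeds like XML documents and ELF binaries.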
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GridMate - End to end testing is a critical piece to ensure quality and avoid... (ThomasParaiso2)
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
5. Agenda
• Understanding your data
• Defining backup requirements
• Overview of Eduserv’s data and requirements
• Problems with traditional backups in a virtual
datacentre
• Solutions to traditional backup issues
6. Terminology
• Recovery Point Objective
• Oldest that data restored from backup may be
• Recovery Time Objective
• Time allowed to restore the data
• Backup window
• Time in which the backup must complete
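The three terms above can be made concrete with a small check. A minimal sketch, with purely illustrative function names and schedule numbers (nightly backups, a 24-hour RPO, a 6-hour window):

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """RPO: the newest restore point may be no older than the RPO."""
    return now - last_backup <= rpo

def fits_window(backup_duration: timedelta, window: timedelta) -> bool:
    """Backup window: the job must finish inside the allotted window."""
    return backup_duration <= window

# Illustrative values: last backup at 01:00, checked at 22:00 the same day.
last = datetime(2013, 10, 1, 1, 0)
now = datetime(2013, 10, 1, 22, 0)
print(meets_rpo(last, now, timedelta(hours=24)))            # True: 21h < 24h RPO
print(fits_window(timedelta(hours=3), timedelta(hours=6)))  # True: 3h job, 6h window
```

If backups only run nightly, the RPO check fails as soon as a backup is skipped, which is why RPO drives backup frequency while the window constrains backup duration.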
7. Terminology
• Backup/Archive/DR
• Backup: used to recover data following loss/corruption
• Archive: used to store data long term
• Disaster recovery: policy and process to provide service
continuation in event of catastrophic failure
8. Terminology
• Consistency
• Crash consistent: does not provide guarantees of data
integrity
• File system consistency: guarantees file system state
• Application consistency: guarantees application
consistency
15. Understand your data
• Data structure
• Highly transactional/static content
• Large or small files
• Rate of duplication
• Data use
• Useful life of the data
• Does the data need to be backed up
• Who controls the data
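The "rate of duplication" can be estimated by chunking the data and hashing each chunk. A minimal sketch using fixed-size chunks and SHA-256 (real deduplicating backup targets typically use variable-size, content-defined chunking, so treat this as illustrative only):

```python
import hashlib

def dedup_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Fraction of chunks that duplicate an earlier chunk."""
    seen = set()
    total = dupes = 0
    for i in range(0, len(data), chunk_size):
        digest = hashlib.sha256(data[i:i + chunk_size]).digest()
        total += 1
        if digest in seen:
            dupes += 1
        else:
            seen.add(digest)
    return dupes / total if total else 0.0

# Highly repetitive data (e.g. many VMs built from the same OS image)
# dedupes well: 10 identical chunks -> 9 duplicates.
print(dedup_ratio(b"A" * 4096 * 10))  # 0.9
```

A high ratio means disk-based backup with de-duplication will shrink the storage footprint dramatically; near-zero means it will not.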
16. Requirements
• Why is the data backed up
• Recovery Time Objective
• Recovery Point Objective
• Retention period
• Offsite requirements
• Cost
17. Virtual server data
• Large and small files
• High percentage of duplicated data
• Data change rate varies
• Typically short data life
• What we don’t back up
• We don’t control the data
18. Virtual server backup requirements
• Restores: recover from deletion/corruption
• Backup service that is independent from the OS
• Fast backup and restores with low overhead
• Short retention period
• Ability to restore entire VMs or individual files
• Single site/dual site
• Self service backups
• Scalability
• Low cost
19. A very…brief backup history
• One to one relationship between servers and backup
disks
http://www.flickr.com/photos/sylvar/31436967/sizes/l/in/photostream/
20. A very…brief backup history
• Central backup tape repository for backup storage
• Accessed via a client side agent
• Traditional approach was used initially for virtual
machine backups
21. Problems with traditional backups in a virtual world
• Processing
• High consolidation ratios mean higher impact
• Agent based backups require client resources
• Streaming to tape
• No parallelisation
• High latency
• Issues with long term incremental backups
• High administrative overhead
22. Problems with traditional backups in a virtual world
• Restore time
• Slow to locate and load tapes, and to locate data on tape
• Slow to restore entire VM as process is the same as
physical server
• Storage footprint
• Large storage footprint required as de-duplication etc. cannot
easily be used
• To improve restore time, full backups taken weekly
23. Resolutions to issues – disk to disk
• Enhanced parallelisation of jobs
• Reduced administrative overhead
• Improved restore time
• Reduced footprint
24. Resolution – move backups to the hypervisor
• No more agents :-)
• Change block tracking
26. Resolution – move backups to the hypervisor
• No more agents :-)
• Change block tracking
• Single backup to provide file level and image level
restore
28. Resolution – move backups to the hypervisor
• No more agents :-)
• Change block tracking
• Single backup to provide file level and image level
restore
• Forever/Reversed incremental
31. Resolution – move backups to the hypervisor
• No more agents :-)
• Change block tracking
• Single backup to provide file level and image level
restore
• Forever/Reversed incremental
• Scale-out infrastructure
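The two key ideas above, change block tracking (back up only blocks the hypervisor reports as changed) and reversed incrementals (keep the newest restore point as a full image, with older points stored as reverse deltas), can be sketched as follows. The data structures are purely illustrative, not any vendor's on-disk format:

```python
# Sketch: reversed-incremental store fed by change-block-tracking data.
class ReversedIncrementalStore:
    def __init__(self, full: dict):
        self.full = dict(full)    # newest restore point: always a full image
        self.reverse_deltas = []  # older points: {block_no: displaced bytes}

    def backup(self, changed: dict):
        """Apply only CBT-reported changed blocks; save displaced blocks
        as a reverse delta so the previous point stays restorable."""
        delta = {blk: self.full[blk] for blk in changed if blk in self.full}
        self.reverse_deltas.append(delta)
        self.full.update(changed)

    def restore(self, points_back: int = 0) -> dict:
        """0 = newest point (the full image, restored instantly);
        n = walk n reverse deltas backwards."""
        image = dict(self.full)
        start = len(self.reverse_deltas) - points_back
        for delta in reversed(self.reverse_deltas[start:]):
            image.update(delta)
        return image

store = ReversedIncrementalStore({0: b"v1", 1: b"v1"})
store.backup({0: b"v2"})             # only the changed block is written
print(store.restore())               # newest point: {0: b"v2", 1: b"v1"}
print(store.restore(points_back=1))  # previous point: {0: b"v1", 1: b"v1"}
```

Note the restore-time benefit: the most common restore target, the latest point, requires no delta replay at all, unlike a traditional forward-incremental chain.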
33. What does this mean?
• 170GB machine with static data
• Traditional backup: ~3 hours
• Virtualised backup: ~2 minutes
• ~230TB VM data
• Virtualised backup: ~3 hour backup window
• Continue meeting the backup window with horizontal
scaling
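The quoted figures are consistent with change block tracking reading only a small fraction of the disk. A back-of-envelope check, assuming the effective read rate stays roughly constant (all numbers illustrative, derived from the slide):

```python
# Back-of-envelope check of the slide's numbers (1 GB = 1024 MB here).
size_gb = 170
full_backup_s = 3 * 3600   # ~3 hours: traditional full backup reads everything
cbt_backup_s = 2 * 60      # ~2 minutes: CBT backup of the same machine

rate_mb_s = size_gb * 1024 / full_backup_s
data_read_gb = rate_mb_s * cbt_backup_s / 1024
print(f"effective rate: {rate_mb_s:.1f} MB/s")       # ~16.1 MB/s
print(f"data read with CBT: {data_read_gb:.1f} GB")  # ~1.9 GB, ~1% of the disk
```

In other words, a ~90x speed-up falls out naturally once only changed blocks are read, which is why largely static machines benefit the most.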
34. Conclusion
• Understand your data
• Understand your requirements
• For virtual backups
• Look at disk to disk for virtual platforms
• Make sure you take advantage of low processing
overhead such as change block tracking and single
backups for image and file level restores
• Only backup what you need to!
35. Thank you – questions?
Charles Llewellyn and Matt Johnson