Practical framework to help SharePoint Administrators understand and develop a successful SharePoint recovery plan. Focus is on pragmatic "what and how" for an administrator rather than academic theory or burdensome project management structure.
Archiving and compliance for SharePoint on premise and online – Olga Siamashka
OpenText Application Governance & Archiving for Microsoft SharePoint (AGA) empowers organizations to meet compliance and archiving requirements, manage the growth of SharePoint sites, provide access to disparately spread enterprise content, and reduce ongoing administration and storage costs. AGA can support you in on-premises, cloud, or hybrid environments, even with different SharePoint versions or Office 365.
Pyxa Solutions provides document migration services with a standardized process including project kick-off, gap assessment, data and document mapping, testing, and validation. The document outlines Pyxa's typical migration project milestones and key activities at each stage, with an emphasis on gaining client agreement and buy-in throughout.
Continuously improving factory operations is of critical importance to manufacturers. Consider the facts: the total cost of poor quality amounts to a staggering 20% of sales (American Society of Quality) and unplanned downtime costs plants approximately $50 billion per year (Deloitte).
The most pressing questions are: which process variables affect quality and yield, and which process variables predict equipment failure? Getting to those answers is giving forward-thinking manufacturers a leg up over competitors.
The speakers address the data management challenges facing today's manufacturers, including proprietary systems and siloed data sources, as well as an inability to make sensor-based data usable.
Integrating enterprise data from ERP, MES, maintenance systems, and other sources with real-time operations data from sensors, PLCs, SCADA systems, and historians represents a major first step. But how do you get started? What is the value of a data lake? How are AI/ML being applied to enable real-time action?
Join us for this educational session, which includes a rare view from one of our SWAT team experts into our roadmap for an open source industrial IoT data management platform.
Key Takeaways:
• How to choose an initial project from which to quickly demonstrate high value returns
• Understand the value of multivariate data sources, as opposed to a single sensor on a piece of equipment
• Understand advances in big data management and streaming analytics that are paving the way to next-generation factory performance
MICHAEL GER, General Manager, Manufacturing and Automotive, Hortonworks and RYAN TEMPLETON, Senior Solutions Engineer, Hortonworks
How to Test Big Data Systems | QualiTest Group – Qualitest
Big Data is perceived as a huge amount of data and information, but it is a lot more than this. Big Data may be said to be a whole set of approaches, tools, and methods for processing large volumes of unstructured as well as structured data. The three parameters on which Big Data is defined, i.e. Volume, Variety, and Velocity, describe how you have to process an enormous amount of data in different formats at different rates.
QualiTest is the world’s second largest pure-play software testing and QA company. Testing and QA is all that we do! Visit us at: www.QualiTestGroup.com
Hortonworks for Financial Analysts Presentation – Hortonworks
Hortonworks was founded in 2011 by former Yahoo engineers to support the growth of Apache Hadoop. Their strategy is to overcome technology gaps by making Hadoop easier to install and use, enable an ecosystem of partners by defining open APIs, and overcome knowledge gaps by expanding technical content and training. This will help drive wider adoption of Apache Hadoop as the platform for managing big data in the enterprise.
The Importance of Data for DevOps: How TCF Bank Meets Test Data Challenges – Compuware
Generating realistic, privatized test data and delivering it to your teams fast enough to meet the demands of your business is a growing issue at agile organizations.
To help you overcome these challenges, TCF Bank shares how they are innovatively using DevOps-supporting test data management techniques with the help of Compuware to effectively:
• Deliver test data to internal teams with agility
• Develop repeatable processes that fit within two-week sprints
• Privatize data based on nuanced demands from development teams
• Manage an influx of test data requests from internal teams
• Automate processes to ensure test data management aligns with security protocols
• Work across mainframe and distributed teams with their own priorities and deliverables
10 Things You'll Need to Succeed with Information Governance and SharePoint – RecordLion
This educational presentation discusses not only Information Governance surrounding SharePoint environments, but also how to expand beyond SharePoint to other platforms in your organization.
Testing Big Data: Automated Testing of Hadoop with QuerySurge – RTTS
Are You Ready? Stepping Up To The Big Data Challenge In 2016 - Learn why Testing is pivotal to the success of your Big Data Strategy.
According to a new report by analyst firm IDG, 70% of enterprises have either deployed or are planning to deploy big data projects and programs this year due to the increase in the amount of data they need to manage.
The growing variety of new data sources is pushing organizations to look for streamlined ways to manage complexities and get the most out of their data-related investments. The companies that do this correctly are realizing the power of big data for business expansion and growth.
Learn why testing your enterprise's data is pivotal for success with big data and Hadoop. Learn how to increase your testing speed, boost your testing coverage (up to 100%), and improve the level of quality within your data - all with one data testing tool.
The document discusses a presentation on creating a successful SharePoint recovery plan given by Paul LaPorte of Metalogix. It provides an agenda for the presentation, which includes discussing why a recovery plan is critical, how to create a successful backup and restore plan, learning objectives such as what peer organizations are doing for service level agreements, and a case study on disaster recovery. The presentation emphasizes that testing recovery plans is essential since the vast majority of organizations fail recovery tests or require changes after testing.
This document discusses key performance indicators (KPIs) for client request processing times and system availability at PHH Corporation. It shows that average turnaround times for rating, requirements statements of work (SOWs) and implementation SOWs have decreased in recent months. Client availability is over 99.9% for critical systems and over 99.98% for planned availability. Some applications and vendors have availability below targets, including SourceCorp at 96.56% critical availability. Root cause analyses and corrective actions are documented for several past incidents.
Watch this Webinar to Understand:
- Common Causes for data loss in Microsoft Exchange and Office 365
- How to mitigate data loss caused by user errors
- How these challenges appear when you move to Office 365
- How cloud-to-cloud backup helps minimize data loss in Microsoft Office 365
March 2016 HPE Data Protector
Comprehensive data protection for the modern enterprise
If you pick up the latest datacenter trends reports from ESG, Gartner, and IDC, you will notice that improving backup and recovery appears among the top IT priorities for organizations. The reason for that is simple: as the velocity, variety and complexity of data continue to accelerate, so do the risks of not being able to speedily restore critical systems and applications in case of disaster or data loss.
The document discusses Fishbowl Solutions' Admin Suite, a set of tools to automate and streamline administration of Oracle WebCenter Content. The Admin Suite includes components for batch loading content, enhancing workflows, subscription notifications, and advanced user security mapping. It aims to simplify common admin tasks, increase user adoption and productivity, and improve security, insight and reporting. Customer examples show how the tools helped organizations manage migrations, integrations, reviews and user access management more efficiently.
As a business, your most important asset is your data. But what happens when disaster strikes? Learn how to develop a comprehensive data protection plan to help protect your critical information.
As we all know, more and more organizations are starting to ask, “Do we or do we not implement Office 365?” However, as these discussions take place, governance is rarely addressed or considered. The main reason is that the majority believe that once they have implemented governance, they are done, unless there is an update such as a server name change or an employee change (such as a departure or addition). During the initial planning around governance, there were likely discussions about auditing the governance document and holding quarterly reviews to ensure that the document is up to date and still fits the business. However, it is common to forget to follow through afterwards, even though it is documented within the governance document.
Governance becomes even more important with Office 365 precisely because it is cloud-based and ever-changing, with new and deprecated features arriving on a pretty regular basis. This means all of the content, backup, recovery, etc. are handled by Microsoft, and you have virtually no control over it (can you say MAJOR SLA impact?). In this session we will review the areas of concern and how they can be addressed within the governance document, the importance of reviewing the document frequently, and ways to make the information available to your internal SharePoint community. In addition, we will review the features of Office 365 that will have a major impact on SharePoint and Office Apps. We will review each of these applications and the areas of importance that should be addressed in the governance document, as well as why each of them is important.
EPM Cloud in Real Life: 2 Real-world Cloud Migration Case Studies – Datavail
In this presentation at the HugMN user conference, we presented 2 different successful real-world EPM Cloud migration and implementation case studies from different industries. Get a bird's-eye view into the practicalities of moving to cloud, and the tools you need to make the business case for your own company.
Webinar: Ten Ways to Enhance Your Salesforce.com Application in 2013 – Emtec Inc.
The document outlines 10 ways to improve a Salesforce.com application in 2013 according to a webinar presentation by Emtec. The top three recommendations are: 1) Integrate Salesforce.com with key external systems to improve processes and access data from any system, 2) Enrich and cleanse data in Salesforce.com to promote accurate reporting and decision making, and 3) Bring other functional groups onto the Salesforce.com platform to improve collaboration and reduce redundant systems. The webinar also provides guidance on how to implement the recommendations and considerations for each.
Why You Need Intelligent Metadata and Auto-classification in Records Management – Concept Searching, Inc
Auto-classification removes a burden from IT teams and end users. But what and where is the content being classified? Then what happens?
Auto-classification not only organizes your content but also provides an environment where information governance and compliance policies and processes can be implemented enterprise-wide. With automatic multi-term metadata generation and powerful taxonomy tools, the positive impact on your business is quickly realized.
As well as the visible impact of search improvement, the elimination of end user tagging reduces both productivity drain and tagging errors, to safeguard information that should be protected, such as confidential information or records.
Find out how to clean up, optimize, and organize your enterprise content, providing a framework for effective records management.
* Metadata generation – why it is so important
* Auto-classification – why you can’t live without it
* Taxonomy approaches that are manageable – by the staff you already have
Dreamforce - Chaining Approval Processes with Apex Code – scottsalesforce
The document discusses using Apex code to chain approval processes in Salesforce. It provides two use cases - chaining two approval processes for a proof of concept opportunity, and using an approval process for content contributions. The presenters demonstrate how to chain approval processes programmatically and submit records for approval. They also discuss additional capabilities like chaining across objects and implementing child processes.
What You Need to Know Before Upgrading to SharePoint 2013 – Perficient, Inc.
Ready to join the SharePoint 2013 revolution but not sure what is involved? Are you in the middle of a migration that is behind schedule? This presentation walks you through general guidelines and common pitfalls to avoid so your transition to SharePoint 2013 will be successful.
Speaker Suzanne George discusses tips and tricks to ensure a successful SharePoint 2013 implementation and describes common mistakes that organizations make during the transition.
Whether you are in the middle of migrating to SharePoint 2013 or you are just thinking about implementation, this session will give you tools that will help you successfully deploy SharePoint within your organization.
Presenter Suzanne George, MCTS, is a Senior Technical Architect at Perficient. She has developed, administered, and architected website applications since 1995 and has worked with top 100 companies such as Netscape, AOL, Sun Microsystems, and Verio. Her experience includes custom applications and SharePoint integration with applications such as ESRI, Deltek Accounting Software, and SAP. Suzanne sits on the MSL IT Manager Advisory Council, was a contributing author for SharePoint 2010 Administrators, and presents at SharePoint Saturdays around the country.
SharePoint migrations rarely turn out as you plan them. They are sometimes risky and too often take longer than planned. Over the last 10 years of migrating from SharePoint 2003, 2007, 2010 to the latest versions of SharePoint/Office 365 we’ve seen a consistent theme -- organizations underestimate the complexity and level of effort required for a successful, smooth migration.
Whether you are planning to complete your own migration, or engaging a vendor to assist, this webinar will discuss precautions you can take to avoid the slippery slope experienced in SharePoint migrations.
An OBIEE Success Story: How a Regional Utility Created Visibility in Supply Chain provides an overview of a project utilizing both OBIEE and Business Intelligence Analytics products. The project’s goal was to provide timely data and reporting to Supply Chain to aid strategic decision making. The result was a reduction in overall operational costs, performance and productivity tracking, inventory management in partnership with business operations, and the initiation of basic governance practices for the data within the Oracle E-Business Suite.
Did you know that most migration projects run over budget or, even worse, fail? This webinar discusses the potential problems and how to address them.
The ability to mass-move content is relatively straightforward but, from an information governance view, simply moving documents from one repository to another is not enough. Content that was unmanaged will remain unmanaged, continuing to expose organizations to risk.
Understand the sophisticated techniques needed to ensure compliance and records management objectives are met during the migration process:
• The difference between migration and intelligent migration, and why it matters
• How intelligent migration facilitates the entire migration process
• Why compliance and governance go hand in hand with intelligent migration
• What content optimization is, and why is it a critical component of intelligent migration
• Why intelligent migration dramatically improves search
Speakers:
Michael Paye – Chief Technology Officer at Concept Searching
Robert Piddocke – Vice President of Channel and Business Development
On Open Day, we share our activities of the month with each other and the community. It's when we take a step back and see where we stand. Here's our Open Day for May 2018.
Enterprise SharePoint Program - Architecture Models - (Innovate Vancouver) - ... – Innovate Vancouver
Contact Innovate Vancouver to help on your next project!
Knowledge Management in Sharepoint - Article:
https://innovatevancouver.org/2022/10/10/knowledge-management-in-sharepoint/
Travis Barker, MPA GCPM
Consulting@innovatevancouver.org
https://innovatevancouver.org
Solution Centric Architectural Presentation - Implementing a Logical Data War... – Denodo
Watch full webinar here: https://bit.ly/3H5AYZf
Implementing a logical data fabric as an architecture makes absolute sense when you have data spread across various sources in the cloud, including data warehouses, data lakes, and even real-time data. In this session, our customer will discuss the ways in which they implemented Denodo as a logical data fabric and how it helped them reduce risk and speed up time to access data.
Three signs your architecture is too small for big data. Camp IT December 2014 – Craig Jordan
Three capability gaps that a traditional business intelligence architecture has with respect to processing big data and recommended extensions to address them.
Similar to 7 Steps to a Successful SharePoint Recovery Plan
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
UiPath Test Automation using UiPath Test Suite series, part 5 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Communications Mining Series - Zero to Hero - Session 1 – DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
A tale of scale & speed: How the US Navy is enabling software delivery from l... – sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI – Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster and ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 – Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs – Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
Intros….
We created this webinar because we realized there’s not a lot out there on the subject. We’re going to be talking about the meat and potatoes of SharePoint backup and restore: what is necessary to prepare, what are some gotchas, and what are some solutions. I want you all to be able to either create or validate your disaster recovery solution for SharePoint. We may all be in different situations at work; maybe your systems or infrastructure teams are handling the backup process. But you are the one who owns SharePoint, so you need to make sure that your baby is protected.
That is precisely the use case that Michael will be sharing with us in a bit.
I promise you, this webinar will not be a product pitch, but at the end I will briefly show how a 3rd-party product like Metalogix Backup can help fill in the gaps.
Alrighty, on to the subject at hand…
Click…
That’s really going to be the theme of today’s webinar: to get involved in the strategy. Backup and recovery can be a dry subject. Let’s be honest, it’s not the sexiest thing to talk about. It’s pretty doom and gloom. It’s like buying life insurance. But if you’re prepared, not only technically with what steps you’re going to take, but have also prepared the business, then there’s nothing to worry about. You can sleep well knowing your baby is safe.
Click…
So today, we’ll either validate the steps you’ve already taken or we’ll fill in the gaps that have been left out of SharePoint’s recovery objectives. We’ll discuss:
How you’re involved in the process as a SharePoint admin, architect, DBA, or whatever your role may be. A lot of us may have just had SharePoint dumped on us, and we’re treating it like any other application we have. If you haven’t realized it yet, SharePoint is a massive platform with a lot of moving parts. It does not fit into a one-size-fits-all recovery plan.
We’ll go into the eight or so pieces that will make you successful. The theme here is going to be focused on creating the right backup strategy for the business. What you’ll notice is that, from a technology point of view, backup is pretty simple and you have a lot of options. I always say that backup is a science while recovery is an art. I tell clients and prospects this all the time, because anyone can back up SharePoint with a few clicks or scripts. But surprisingly few think about preparing for the recovery: can I restore everything I need? How long will it take to recover from a complete hardware malfunction, versus a more common scenario like how long the loss of content will take to restore? When we go through creating a plan here in a bit, you’ll see the backup piece involves working with the business to come up with a solution, while the recovery piece is heavily driven by the technology you choose.
Click…
From there, we’ll go into what options you have (technology-wise) and what will be a good fit for you. Everything from what’s free OOTB to product add-ons.
Click…
Then, once you’ve weighed your options and understand how many man-hours you are willing to put in and how much you’re willing to spend, we’ll discuss who to involve so that your backup and recovery plan becomes a sharply honed process.
The goal is to have the smallest RPO possible
Let’s look at an example and discuss some Microsoft best practices. First off, Microsoft recommends not using OOTB backup tools for content DBs larger than 200 GB, due to the risk of missing backup windows and Recovery Point Objectives.
Microsoft tested native SharePoint and SQL backup and was able to back up only 600 gigabytes in six hours using a high-end server. In my opinion, a 600 GB database is smaller than a typical SharePoint farm these days.
Microsoft even states that if you’re using OOTB techniques, you should limit your content database size to 100 GB, and site collection backup should not be used on anything larger than 85 GB.
That means you can’t granularly restore content reliably if a site collection is too large. It’s because the process is too resource-intensive and will take too long.
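To make those figures concrete, here is a minimal Python sketch of the backup-window arithmetic, using the throughput implied by the Microsoft test above. The database names and sizes are hypothetical; plug in your own inventory.

```python
# Backup-window arithmetic from the Microsoft test cited above:
# ~600 GB in six hours works out to ~100 GB per hour on a high-end server.
OOTB_THROUGHPUT_GB_PER_HOUR = 600 / 6

MAX_OOTB_CONTENT_DB_GB = 200     # Microsoft: avoid OOTB backup above this
RECOMMENDED_CONTENT_DB_GB = 100  # Microsoft: keep content DBs at or below this

# Hypothetical farm inventory: content database name -> size in GB.
content_dbs = {"WSS_Content_Intranet": 340, "WSS_Content_Teams": 90}

for name, size_gb in content_dbs.items():
    hours = size_gb / OOTB_THROUGHPUT_GB_PER_HOUR
    if size_gb > MAX_OOTB_CONTENT_DB_GB:
        status = "too large for OOTB backup"
    elif size_gb > RECOMMENDED_CONTENT_DB_GB:
        status = "above recommended size"
    else:
        status = "ok"
    print(f"{name}: {size_gb} GB, est. backup window {hours:.1f} h ({status})")
```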
You need to understand your role. Are you responsible if the power goes out and you can’t even communicate with your servers? That’s probably not your responsibility; it will be handled by the infrastructure team. But that team is taking backups, and you need to know what state they can get you back to after a disaster.
Ask them what tools they’re using and work with them to find the limitations they have with SharePoint
Maybe they don’t support restoring individual content DBs, or individual objects like sites, lists, and items in full fidelity, with workflows, versions, and such.
This is a true test of a good recovery strategy. End users can be demanding and often don’t care about the limitations of SharePoint and its toolset. There are always particular industries (I’m looking at the world of finance and law and similar verticals) whose users are not very understanding and demand a very low RTO.
So where do you start?
There’s a lot that must be prepared so we’ll run through the major planning aspects from who to involve to analyzing what you can do to lessen your risk
Get yourself an Executive sponsor
Find a tech savvy executive to roll out your plan as part of overall SharePoint governance
As I said in the opening objectives, the backup piece of your strategy involves more human interaction than working with the technology.
You just need to decide what is possible: here are the RPOs and RTOs we can technically meet, and the business must decide whether that aligns with the needs of the knowledge workers.
This way, you’re not making decisions in a vacuum.
Create the dreaded Service Level Agreement (SLA) and get it signed off by executive sponsors and other stakeholders
Define acceptable levels of what can be recovered and how long that recovery process will take.
Define certain locations of SharePoint that you know contain documents that are heavily edited and constantly changing and may need to be backed up more often than stale locations.
Keep in mind to separate disaster recovery from day-to-day recovery… The majority of recovery work you’ll do is simple document, list, or site recovery. These are simple to back up and restore on the surface, but they will probably differ from environment to environment. Maybe there’s a customization you didn’t even know about because it was implemented before you got there. Are there certain dependencies on this content that need to be restored as well, or is the business OK with simply getting the document back online?
All of this needs to be considered and documented in the SLA
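One lightweight way to capture those targets is as structured data rather than prose, so fire drills can be checked against them later. The following Python sketch is illustrative only; the scopes, scenarios, numbers, and owners are hypothetical placeholders, not recommendations.

```python
# A sketch of the recovery targets an SLA might capture, separating day-to-day
# recovery (a lost document or list) from true disaster recovery. All scopes,
# numbers, and owners below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class RecoveryTarget:
    scope: str        # what is covered, e.g. a site collection or the whole farm
    scenario: str     # "day-to-day" or "disaster"
    rpo_hours: float  # maximum acceptable data loss, in hours
    rto_hours: float  # maximum acceptable time to restore, in hours
    owner: str        # stakeholder who signs off and QAs after a restore

sla = [
    RecoveryTarget("/sites/finance", "day-to-day", 1, 4, "Finance lead"),
    RecoveryTarget("/sites/archive", "day-to-day", 24, 24, "Records manager"),
    RecoveryTarget("entire farm", "disaster", 24, 72, "IT director"),
]

for t in sla:
    print(f"{t.scope:<16} {t.scenario:<11} RPO {t.rpo_hours:>4.0f}h  "
          f"RTO {t.rto_hours:>4.0f}h  ({t.owner})")
```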
Document the ownership of tasks, responsibilities, demarcation points, and handoffs
This will involve different parts of the business. We need to ask questions and assign responsibilities to the different parts of the business that have a stake in SharePoint. Have quarterly meetings with key stakeholders in each department. Go over their section of the farm to know what is important to them.
Give ownership so the SLA can be updated and signed off by these folks. These people can also be your QA after a restore is finished.
Taking these steps will involve making the SLA a public document. The onus does not fall squarely on you once this document is public. You’ve provided steps to take along the way from a backup plan to a restore methodology but it’s a team effort that all are aware of.
Next, conduct an impact and risk analysis of the current environment
For example, I had a client who had a site collection that housed financial reports and data for the executive team. Content was added to it every day between 8am and noon. The admin knew he needed the shortest RPO possible on this section of the farm. After discussing the actual business case with the executives, he realized that the content was only being added to the site for read-only publication and was not being edited, and SharePoint wasn’t the only system of record for the documents. It wasn’t SharePoint’s responsibility to immediately back up these documents, and once a day would suffice for the business. He saved himself a lot of time and money by having a conversation.
On the other hand, you may have some databases or sites that need to be backed up more often than the rest of your farm. Ensure that these sections of the farm get special treatment and have shorter RPOs.
Yes, you must prove it! All this planning and research you’ve done needs to be tested and constantly reviewed….
Proving out your SLA goes a long way, so you won’t have to act like the guy in the cartoon and scream for help.
Did I mention you need to prove it?! I can’t stress this point enough, mostly because it’s rarely accomplished: conduct ongoing fire drills!
Continually review the outcomes. SharePoint has probably changed since you last did a test and you don’t want to be caught with your pants down.
Once you conduct fire drills, and based on their outcomes, update the SLA document with the changes to the farm and changes to RPOs and RTOs…..Then you can start the process all over again…
It will be worth it in the end. You’re not ignoring the fact that a loss of data will happen. It’s when, not if. And if you’ve put these processes into place, you’ll be prepared.
My clients always find something to change when conducting fire drills. I once worked with someone who noticed that their backup sets were corrupted, but there was no clue in the actual backup file itself; it only surfaced once the restore failed. This ended up being exponentially detrimental because he was using incremental backups. As a reminder, incremental backups go back to the last backup taken, no matter what type, and only back up what has changed. This means that if one backup is corrupted, all subsequent backups are useless, since they’re all dependent on each other.
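To see why one bad set poisons the chain, here is a small Python sketch of walking an incremental chain. The backup names and corruption flags are made up for illustration; in practice, the only trustworthy verification is an actual test restore.

```python
# Why one corrupt incremental poisons the chain: each incremental depends on
# every backup taken since the last full, so a restore is only possible up to
# the first corrupt set. Names and flags here are hypothetical.
chain = [
    ("full_sunday.bak", False),     # (backup set, corrupt?)
    ("incr_monday.bak", False),
    ("incr_tuesday.bak", True),     # corruption that only surfaces on restore
    ("incr_wednesday.bak", False),  # unusable: depends on Tuesday's set
    ("incr_thursday.bak", False),   # unusable for the same reason
]

restorable = []
for name, corrupt in chain:
    if corrupt:
        print(f"{name} is corrupt; every later incremental is unusable")
        break
    restorable.append(name)

print("latest restorable point:", restorable[-1] if restorable else "none")
```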
A best practice I usually recommend is to conduct these fire drills on a secondary recovery environment. In a true disaster, you can even restore the databases to this recovery environment and redirect users there.
Don’t forget that a large portion of a successful strategy involves communication and ensuring the expectations of the business are set. The point of preparing these documents and doing all this planning is that the expectations of the users don’t outstrip reality. Your job is on the line, and you need a way to ensure end users don’t get angry because of outlandish expectations.
Funny story: I had a client who, whenever he received a ticket to restore lost data, would send an email that said “Sit back, relax, and grab a cup of coffee. Your content will be with you shortly.” Attached to the email was the SLA.
Setting expectations with the business is key to your strategy. It’s not all about the technology.
A review of what we’ve been discussing, but use this as your checklist…
Attain an Executive sponsor
Service Level Agreement (SLA) signed off by executive sponsor and stakeholders
Documented ownership of tasks, responsibilities, demarcation points, and handoffs
Conduct risk analysis of current environment
Fully tested, documented and sign off
Ongoing fire drills and updates to SLA
So those recommendations are all well and good, but they don’t address the root of the problem. BLOBs, or Binary Large Objects, make up 90-95% of your databases. This is the content. It’s considered the unstructured data, while the metadata of the document is the structured portion. Basically, BLOBs are what is growing your environment so rapidly and extending your RPOs.
The interesting thing about them is that BLOBs are immutable. They are never updated, only created and deleted. If they never change, why do we have to back them up every time? You don’t. Think about it: every time you back your farm up for granular recovery, you’re backing up data you know has not changed. In reality, you only need to back up the BLOBs that have been added to SharePoint because new documents were added or existing documents were edited.
Microsoft provides a couple of different libraries to externalize these BLOBs outside of the content database. Docs can be put on devices that have faster read/write I/O and are much cheaper. This is a win for performance and cost, and if done correctly, it can drastically shorten your RPOs.
Metalogix Backup integrates with our RBS product called StoragePoint. With StoragePoint, BLOBs are backed up continuously as they are added to SharePoint. BLOBs that have not changed are not included as part of the backup. Thus, once a restore is initiated, a call is made to grab the BLOB based on the pointer to it in the database.
Your backup time is drastically reduced because you only have to back up your databases now. And guess what: once BLOBs are externalized, your database shrinks by 95%. Thus your backup window shrinks drastically.
So, back to our example from a couple of slides ago: your 1 TB content database has become 50 GB. BLOBs are immediately backed up and ready at any point to be restored.
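The savings are easy to sanity-check with back-of-envelope math. This Python sketch reuses the roughly 95% BLOB share and the roughly 100 GB/hour OOTB throughput figure from earlier in the session; the numbers are illustrative, not benchmarks.

```python
# Back-of-envelope savings from BLOB externalization, using figures from the
# session: BLOBs are ~95% of a content database, and OOTB backup throughput
# was ~100 GB per hour in the Microsoft test. Numbers are illustrative.
db_size_gb = 1000             # the 1 TB content database from the example
blob_fraction = 0.95          # share of the database that is immutable BLOBs
throughput_gb_per_hour = 100  # from the ~600 GB in six hours figure

metadata_only_gb = db_size_gb * (1 - blob_fraction)  # ~50 GB left in SQL
before_hours = db_size_gb / throughput_gb_per_hour
after_hours = metadata_only_gb / throughput_gb_per_hour

print(f"database after externalization: {metadata_only_gb:.0f} GB")
print(f"backup window: {before_hours:.1f} h -> {after_hours:.1f} h")
```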
Clearly this changes your backup strategy, so any RBS product you look at should do a couple of things:
1) Retain BLOBs for a specified amount of time even if they’re deleted in SharePoint. This means that you can continue to use out-of-the-box methods for restores. Let’s say you define a BLOB retention period of 30 days. This means you can restore your SQL or SharePoint backups from any time within the last 30 days, and those backups will have references to BLOBs that have been retained (see the sketch after this list).
Or
2) Ensure your RBS product integrates with your 3rd-party backup product. Obviously this provides more automation and could really be the subject of a whole other webinar.
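As a sketch of the retention check implied by option 1 above, the following Python fragment decides whether a backup’s BLOB references should still resolve. The 30-day window matches the example above; the dates and the function name are hypothetical.

```python
# Retention check implied by option 1: a SQL/SharePoint backup restores cleanly
# only if every BLOB it references is still retained by the RBS product.
# The 30-day window is the example from the text; dates are hypothetical.
from datetime import date, timedelta

BLOB_RETENTION = timedelta(days=30)

def restore_is_safe(backup_date: date, today: date) -> bool:
    # BLOBs deleted after the backup was taken are kept for BLOB_RETENTION,
    # so any backup younger than the retention window still resolves.
    return today - backup_date <= BLOB_RETENTION

today = date(2016, 3, 31)
print(restore_is_safe(date(2016, 3, 15), today))  # True: within 30 days
print(restore_is_safe(date(2016, 2, 1), today))   # False: references may dangle
```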