Ron Charity will present on increasing user adoption of SharePoint through replication. Replication can copy SharePoint content to other regions to reduce workload and publishing errors while bringing content closer to users. It also enables active/active disaster recovery environments and offsite backups. The presentation will cover common reasons for replication, types of replication, information architecture and technical considerations, and operational best practices for sustaining a replication solution.
This document provides an overview of key considerations for planning and implementing a SharePoint backup and recovery solution. It discusses scoping requirements with stakeholders, defining service level agreements, technical architecture options, policy and process documentation, testing procedures, training, and governance. The presentation aims to give attendees a holistic view of the end-to-end backup lifecycle for SharePoint.
1) Data warehousing aims to bring together information from multiple sources to provide a consistent database for decision support queries and analytical applications, offloading these tasks from operational transaction systems.
2) OLAP is focused on efficient multidimensional analysis of large data volumes for decision making, while OLTP is aimed at reliable processing of high-volume transactions.
3) A data warehouse is a subject-oriented, integrated collection of historical and summarized data used for analysis and decision making, separate from operational databases.
A data warehouse is a collection of integrated data from multiple sources organized to support management decision making. It contains subject-oriented, integrated, time-variant and non-volatile data stored in a way that is optimized for query and analysis. There are different types of data warehouses including data marts, operational data stores and enterprise data warehouses. Key components of a data warehouse include data sources, extraction, loading, a comprehensive database, metadata and middleware tools.
E&P data management: Implementing data standards – ETL Solutions
Many oil and gas companies have different regional standards for data values stored across various data stores, which can cause issues when aggregating reports or transferring engineers between regions. While standardizing data is important, it is difficult to assign costs to fixing inconsistencies. ETL Solutions proposes using their Transformation Manager software to automate the standardization process. Transformation Manager can analyze metadata to discover which data needs updating, automatically generate transformation scripts, and centrally manage standardization projects across multiple data stores. This metadata-driven approach reduces costs, risks and errors compared to a manual standardization process.
PrDC 2015: SQL Server is cheaper than open source – Terry Bunio
SQL Server was found to be cheaper than open source options for a data warehouse project with the following requirements:
- Serve 100% operational reports from 1TB of data
- No need for advanced features like big data support
- Requirement was for basic textual reporting
An investigation was conducted of SQL Server, Oracle, Sybase, MySQL, and PostgreSQL. SQL Server and PostgreSQL were evaluated further based on costs and functionality. After a 10 year total cost of ownership analysis, SQL Server was found to be cheaper despite having a higher initial license cost. The lessons learned were that open source options are not always cheaper, to test options yourself rather than rely on biased reports, and that Oracle is very expensive.
1) A data warehouse is a collection of data from multiple sources used to enable informed decision making. It contains data, metadata, dimensions, facts and aggregates.
2) The typical processes in a data warehouse are extract and load, data cleaning and transformation, user queries, and data archiving.
3) The key components that manage these processes are the load manager, warehouse manager and query manager. The load manager extracts, loads and does simple transformations on the data. The warehouse manager performs more complex transformations, integrity checks and generates summaries. The query manager directs user queries to the appropriate data.
- Data warehousing aims to help knowledge workers make better decisions by integrating data from multiple sources and providing historical and aggregated data views. It separates analytical processing from operational processing for improved performance.
- A data warehouse contains subject-oriented, integrated, time-variant, and non-volatile data to support analysis. It is maintained separately from operational databases. Common schemas include star schemas and snowflake schemas.
- Online analytical processing (OLAP) supports ad-hoc querying of data warehouses for analysis. It uses multidimensional views of aggregated measures and dimensions. Relational and multidimensional OLAP are common architectures. Measures are metrics like sales, and dimensions provide context like products and time periods.
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
A data warehouse is a central repository for storing historical and integrated data from multiple sources to be used for analysis and reporting. It contains a single version of the truth and is optimized for read access. In contrast, operational databases are optimized for transaction processing and contain current detailed data. A key aspect of data warehousing is using a dimensional model with fact and dimension tables. This allows for analyzing relationships between measures and dimensions in a multi-dimensional structure known as a data cube.
In our experience, performance is the most critical issue in SAP BW. Performance problems are often addressed with new technologies such as BWA or SAP HANA, but the right data management, especially nearline storage and housekeeping, can bring your BW into top form even before you decide to invest in new technologies.
William Inmon is considered the father of data warehousing. He has over 35 years of experience in database technology and data warehouse design. Inmon has written over 650 articles and published 45 books on topics related to building, using, and maintaining data warehouses and information factories. A data warehouse is a collection of integrated, subject-oriented databases designed to support decision-making. It contains data that is non-volatile, time-variant, integrated, and summarized for analysis. Key components of a data warehouse environment include the data store, data marts, and metadata.
The document discusses various techniques for tuning data warehouse performance. It recommends tuning the data loading process to speed up queries and optimize hardware usage. Specific strategies mentioned include loading data in batches during off-peak hours, using parallel loading and direct path inserts to bulk load data faster, preallocating tablespace, and temporarily disabling indexes and constraints. The document also provides examples of using SQL*Loader and parallel direct path loads to efficiently bulk load data from files into tables.
Catherine Railey is a software analyst, programmer, and production support analyst based in Holden, MO. She has over 20 years of experience in software project management, documentation, application support, requirements analysis, and production support. Her technical skills include IBM mainframe systems, COBOL, SQL, and various Microsoft and Unix-based tools. She has worked for Sprint and IBM, where she performed duties like addressing production issues, managing software projects, and improving business processes. Railey has a BSBA in Accounting and Computer Science as well as an MBA in Management.
ETIS09 - Data Quality: Common Problems & Checks - Presentation – David Walker
The document discusses common data quality problems that occur in data warehousing systems and how to check for them. It describes 11 common problem types like referential issues, data type issues, and data content issues. It recommends implementing automated checks that regularly run across source systems, staging areas, and the data warehouse. Additional profiling checks run manually include checking for outliers, minimums and maximums, sequential keys, and data types. Continuous monitoring and prevention is key to ensuring high quality data.
This document provides an overview of data warehousing. It defines data warehousing as collecting data from multiple sources into a central repository for analysis and decision making. The document outlines the history of data warehousing and describes its key characteristics like being subject-oriented, integrated, and time-variant. It also discusses the architecture of a data warehouse including sources, transformation, storage, and reporting layers. The document compares data warehousing to traditional DBMS and explains how data warehouses are better suited for analysis versus transaction processing.
The document describes DataVard's BW Fitness Test and HeatMap products which provide analysis and recommendations to optimize SAP BW systems. The BW Fitness Test analyzes key performance indicators, system usage, and data distribution. It benchmarks the system against others. The HeatMap visualizes query usage and runtimes to identify performance optimization opportunities. Both tools help with data management, testing, compliance, and preparing for upgrades like SAP HANA.
A Brief Introduction to Enterprise Architecture – Daljit Banger
Presentation to Metropolitan University (London) on 16th Feb 2017.
The purpose of the session was to introduce core concepts around Enterprise Architecture and discuss the role of the Enterprise Architect.
What You Need to Know Before Upgrading to SharePoint 2013 – Perficient, Inc.
Ready to join the SharePoint 2013 revolution but not sure what is involved? Are you in the middle of a migration that is behind schedule? This presentation walks you through general guidelines and common pitfalls to avoid so your transition to SharePoint 2013 will be successful.
Speaker Suzanne George discusses tips and tricks to ensure a successful SharePoint 2013 implementation and describe common mistakes that organizations make during the transition.
Whether you are in the middle of migrating to SharePoint 2013 or you are just thinking about implementation, this session will give you tools that will help you successfully deploy SharePoint within your organization.
Presenter Suzanne George, MCTS, is a Senior Technical Architect at Perficient. She has developed, administered, and architected website applications since 1995 and has worked with top-100 companies such as Netscape, AOL, Sun Microsystems, and Verio. Her experience includes custom applications and SharePoint integration with applications such as ESRI, Deltek Accounting Software, and SAP. Suzanne sits on the MSL IT Manager Advisory Council, was a contributing author for SharePoint 2010 Administrators, and presents at SharePoint Saturdays around the country.
Trudy Thompson is an experienced Database Administrator seeking a challenging position. She has over 18 years of experience in information technology and expertise in database design, implementation, and management. Her technical skills include experience with Teradata and Oracle databases as well as software like SQL and PL/SQL.
This document discusses Enterprise Resource Planning (ERP) systems. It provides definitions and examples of ERP functionality and modules. It describes how ERP systems can be customized and expanded upon. It discusses factors to consider in the vendor selection process such as functionality, costs, and vendor support. It also summarizes key aspects of a successful ERP implementation including change management, process redesign, and realizing benefits through business process improvements rather than just technology changes.
Enterprise Architecture - An Introduction from the Real World – Daljit Banger
This document provides an overview of enterprise architecture. It begins with an agenda for the overview presentation. It then discusses several public architectural frameworks that can provide guidance. Next, it explains that enterprise architecture aims to align an organization's technology landscape with its strategic goals. It provides an example of how enterprise architecture could help ensure compliance with new privacy regulations. The document outlines the typical products and deliverables of an enterprise architecture practice, including various types of models, assessments, roadmaps and more. It discusses the roles and responsibilities of enterprise architects, solution architects and technical architects. Finally, it emphasizes that enterprise architecture realization depends on the specific organization and is supported by frameworks, patterns and best practices.
Supporting material for my webinar to the ACS - June 2017 – Daljit Banger
The attached slide deck was used to support a webinar for the Australian Computer Society (Queensland) on June 1st, 2017.
It contains some previously used slides with modified content and some additional slides to support the webinar theme.
Full Webinar Video can be seen at https://youtu.be/_41-izCm5rw
Mahammad Shabbeer is an experienced Oracle DBA with over 4 years of experience working with IBM India Pvt Ltd and Hi-Tech Solutions Pvt Ltd. He has expertise in Oracle 9i, 10g, 11g, and 12c and is proficient in SQL, C, HTML, UNIX, AIX, Linux, Solaris, and Windows. As an Oracle DBA, his responsibilities include database configuration, installation, administration, monitoring, backups, security, patching, and more. He is looking for a challenging role that allows him to utilize his Oracle DBA knowledge and expertise.
SharePoint Governance: stories, myths, legends and real life – Toni Frankola
SharePoint governance starts with a 600-page document. At our 30-person company, we need a 40-person SharePoint Governance committee, and nobody can determine why a housekeeper has access to the governance document.
Have you heard this type of statement? We most certainly have. In this session, Toni will bust myths like these by providing a workable approach to SharePoint governance in small and large enterprises. We will talk about setting policies as well as what makes sense and what doesn’t. We will break down the governance plan and examine its pieces. Most importantly, we will talk about implementing these policies based on real-life use cases, where no one reads 600-page documents.
Session highlights:
- Developing a workable governance plan
- Setting realistic governance policies
- Automating policy implementation
SharePoint 2013 Governance Planning - SharePoint governance is the set of policies, roles, responsibilities, and processes that guides, directs, and controls how an organization's business divisions and IT teams cooperate to achieve business goals.
Ramesh S Togari is seeking a position as an Informatica developer. He has over 4 years of experience in ETL development using Informatica and other tools. His experience includes developing mappings between various data sources like Oracle and flat files, transforming data, testing mappings, and supporting production systems. He has worked on projects in various domains for clients like ITC InfoTech, Monocept Consulting, NCR, and GrubHub Seamless.
Gururajan Venkataraman is a techno-managerial professional with 18 years of experience leading diverse programs for telecom and banking service providers. He has expertise in strategy, innovation, program management, customer delivery, and transformation. He is certified in project management, CMMI, ITIL, and database administration. Currently he works as a consulting manager and head of the mainframe infrastructure and database team at Temenos India.
EPM Cloud in Real Life: 2 Real-world Cloud Migration Case Studies – Datavail
In this presentation at the HugMN user conference, we presented two successful real-world EPM Cloud migration and implementation case studies from different industries. Get a bird's-eye view into the practicalities of moving to the cloud, and the tools you need to make the business case for your own company.
Vishwanath Mallanagouda is a data warehouse application developer with over 4 years of experience working with technologies like Informatica, Oracle, SQL, and Hadoop. He has expertise in ETL tool development, data modeling, and database administration. Currently working as an application developer at Deloitte India Consulting, his past experience includes projects for banking, insurance, and public sector clients at IBM.
Ramesh S Togari is seeking a position as an Informatica developer. He has over 5 years of experience in ETL development using Informatica and other tools like Snaplogic and Komodo. He has worked on projects in various domains including pharmaceutical, banking, and retail. His responsibilities have included requirement gathering, developing ETL mappings, testing, and production support. His most recent role was as an EEE Lead at ITC InfoTech where he developed ETL programs for IMS Health implementing their business requirements.
Best Practices for Becoming an Exceptional Postgres DBA – EDB
Drawing from our teams who support hundreds of Postgres instances and production database systems for customers worldwide, this presentation provides real-world best practices from the nation's top DBAs. Learn top-notch monitoring and maintenance practices, get resource planning advice that can help prevent, resolve, or eliminate common issues, learn top database tuning tricks for increasing system performance, and ultimately gain greater insight into how to improve your effectiveness as a DBA.
This document provides an overview of Oracle's Information Management Reference Architecture. It includes a conceptual view of the main architectural components, several design patterns for implementing different types of information management solutions, a logical view of the components in an information management system, and descriptions of how data flows through ingestion, interpretation, and different data layers.
A Practical Guide to Selecting a Stream Processing Technology – Confluent
Presented by Michael Noll, Product Manager, Confluent.
Why are there so many stream processing frameworks that each define their own terminology? Are the components of each comparable? Why do you need to know about spouts or DStreams just to process a simple sequence of records? Depending on your application’s requirements, you may not need a full framework at all.
Processing and understanding your data to create business value is the ultimate goal of a stream data platform. In this talk we will survey the stream processing landscape, the dimensions along which to evaluate stream processing technologies, and how they integrate with Apache Kafka. Particularly, we will learn how Kafka Streams, the built-in stream processing engine of Apache Kafka, compares to other stream processing systems that require a separate processing infrastructure.
This document provides an agenda for an Oracle Text tuning presentation. It introduces the presenter and their background in Oracle databases. The document then outlines some common problems businesses face with slow applications and identifies application tuning as the highest priority area to focus on, which can resolve 80% of performance issues. It provides an overview of the proposed solution of using application tuning tools, best practices, monitoring, and consulting to improve performance.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Generating privacy-protected synthetic data using Secludy and Milvus – Zilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-Efficiency – ScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... – Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin... – Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Fueling AI with Great Data with Airbyte Webinar – Zilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
leewayhertz.com - AI in predictive maintenance: Use cases, technologies, benefits ... – alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
HCL Notes and Domino License Cost Reduction in the World of DLAU – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Digital Marketing Trends in 2024 | Guide for Staying Ahead – Wask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers – akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Taking AI to the Next Level in Manufacturing.pdf – ssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Driving Business Innovation: Latest Generative AI Advancements & Success Story – Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Introduction of Cybersecurity with OSS at Code Europe 2024 – Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
2. Read me (Remove when presenting)
• This is a draft document
– Reviews are required by Penton and Metalogix
– Review each page and its notes
– Edit as you see fit and highlight changes in RED
• The presenter has 15-20 minutes to present
• The presentation contains 15-20 slides to meet the time slot
3. Abstract (Remove when presenting)
Organizations with remote or global field offices are finding that despite massive growth in collaboration and content sharing, speed and accessibility have been decreasing. And that means less productivity for your organization.
The solution is replication. Join SharePoint expert Ron Charity as he walks IT professionals through a cursory overview of what replication is, how to identify replication as a solution, best practices for technical solutions, and the operational activities required to sustain a replication solution.
4. BIO
Ron Charity
A published Technologist with 20+ years in infrastructure and application consulting. Experience working in the US, Canada, Australia and Europe. Has worked with SharePoint and related technologies since 2000.
Currently he is responsible for a large global SharePoint environment consisting of several farms that service a financial institution.
Plays guitar in a band, rides a Harley Nightster, owns a Superbird, and enjoys travel, especially to beach destinations.
5. Agenda
• Common reasons for replication
• General type of replication
• Information architecture considerations
• Technical architecture considerations
• Operational considerations
• Next steps
• Further reading
• Contact information
6.
7.
8. Reasons for Replication
•Content Replication – Copying SharePoint content to other regions to reduce content authoring workload, costs and publishing errors. Also replicates content closer to end users, improving the user experience.
•Disaster Recovery – An active / active multi-farm environment creates a highly resilient service offering.
•Data Backup – Copying content databases offsite to comply with offsite backup policy.
9. Types of Replication
•Storage-level replication – At the storage level (focused on a block of binary data, typically offered by storage vendors).
•Database-level replication – Provided by your RDBMS (e.g. Microsoft SQL Server).
•Application-level replication – Replication at the application level enables replication in a more granular manner.
10. Types of Replication
•Storage-level replication – At the storage level (focused on a block of binary data, typically offered by storage vendors).
•Database-level replication – Provided by your RDBMS (e.g. Microsoft SQL Server); see the sketch after this slide.
•Application-level replication – Replication at the application level enables replication in a more granular fashion, at the site collection, site, list and library level.
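To make the database-level option concrete, here is a minimal sketch of enabling SQL Server transactional replication for a content database from Python. It is an illustration under assumptions: the server name (sql-primary), database name (WSS_Content_Intranet), publication name (ContentPub), and article are hypothetical placeholders, a Distributor is assumed to already be configured, and in practice a DBA team would usually drive this (often preferring log shipping or availability groups for SharePoint content databases).

```python
# Minimal sketch (hypothetical names): enable SQL Server transactional
# replication for a SharePoint content database via pyodbc. Assumes a
# Distributor is already configured and the caller has DBA rights.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-primary;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # replication procedures manage their own transactions
)
cur = conn.cursor()

# Mark the content database as published for transactional replication.
cur.execute(
    "EXEC sp_replicationdboption @dbname = N'WSS_Content_Intranet', "
    "@optname = N'publish', @value = N'true'"
)

# Create a publication in the content database and add one table to it.
cur.execute("USE WSS_Content_Intranet")
cur.execute(
    "EXEC sp_addpublication @publication = N'ContentPub', @status = N'active'"
)
cur.execute(
    "EXEC sp_addarticle @publication = N'ContentPub', "
    "@article = N'AllDocs', @source_object = N'AllDocs'"
)
```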
11. Categories for best practices
•Information Architecture – content level practices and steps you must take.
•Technical architecture – technology design practices and steps you must take.
•Operational – operational practices you must take.
12. Information architecture considerations
•Identify content (site collections, sites, lists and libraries) to be replicated – see the inventory sketch after this slide.
•Document publishing intervals for content to understand how often content is refreshed.
•Document source content owners / publishers.
•Obtain service levels to support decisions about the solution – metrics that will be communicated to stakeholders.
•Know your company policy specific to records management and privacy.
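One lightweight way to act on these considerations is a machine-readable inventory that records each replicated scope, its owner, and its publishing interval. The sketch below is one possible shape for such an inventory; the URLs, owners, and intervals are made-up examples.

```python
# Minimal sketch of a replication inventory: what is replicated, who owns
# it, and how often it changes versus how often it is replicated.
# All URLs, owners and intervals are hypothetical examples.
from dataclasses import dataclass

@dataclass
class ReplicatedScope:
    url: str              # site collection, site, list or library URL
    owner: str            # source content owner / publisher
    refresh_hours: int    # how often the source content is refreshed
    replicate_hours: int  # how often it is copied to other regions

inventory = [
    ReplicatedScope("https://intranet/sites/hr", "hr-team@example.com", 24, 24),
    ReplicatedScope("https://intranet/sites/news", "comms@example.com", 1, 4),
]

# A replication interval slower than the publishing interval means remote
# users see stale content; flag those scopes for a service-level review.
for scope in inventory:
    if scope.replicate_hours > scope.refresh_hours:
        print(f"review schedule for {scope.url} (owner: {scope.owner})")
```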
13. Technical Best Practices
•Run a risk workshop with stakeholders to create a risk plan (technical and operational).
•If replicating for a warm standby solution (e.g. SharePoint), make sure you inventory all the databases required.
•If replicating for off-site backups, know your company policy regarding retention.
•Document and track data (e.g. database) size and growth patterns – see the sketch after this slide.
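As one way to track size and growth, the sketch below samples content database sizes from SQL Server's sys.master_files catalog view (which reports size in 8 KB pages) and appends a dated row to a CSV for trend analysis. The server name and the WSS_Content naming convention are assumptions.

```python
# Minimal sketch: append today's content database sizes to a CSV so
# growth patterns can be charted over time. Server name and the
# "WSS_Content" prefix are assumptions.
import csv
import datetime
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-primary;DATABASE=master;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute(
    "SELECT DB_NAME(database_id) AS db, SUM(size) * 8 / 1024 AS size_mb "
    "FROM sys.master_files "  # size column is in 8 KB pages
    "WHERE DB_NAME(database_id) LIKE 'WSS_Content%' "
    "GROUP BY database_id"
)

today = datetime.date.today().isoformat()
with open("db_growth.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for db, size_mb in cur.fetchall():
        writer.writerow([today, db, size_mb])  # date, database, size in MB
```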
14. Technical Best Practices
•Document current operational jobs such as indexing, profile imports etc.
•Investigate your network bandwidth and latency – they will impact replication times.
•Make sure the product(s) have the capability to error-check in case of corruption – see the sketch after this slide.
•Make sure the product(s) log replication times and duration – tie into the helpdesk system for notification and reporting.
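Where the chosen product lacks built-in integrity checks, a simple checksum comparison between source and replica can catch corruption. The sketch below hashes a file on both sides and logs the result and duration; the UNC paths are hypothetical.

```python
# Minimal sketch: detect corruption by comparing SHA-256 checksums of a
# source file and its replicated copy, logging outcome and duration.
# The UNC paths are hypothetical placeholders.
import hashlib
import logging
import time

logging.basicConfig(level=logging.INFO)

def checksum(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB chunks
            h.update(chunk)
    return h.hexdigest()

start = time.monotonic()
ok = (checksum(r"\\source\content\policy.docx")
      == checksum(r"\\replica\content\policy.docx"))
logging.info("integrity check %s in %.1fs",
             "passed" if ok else "FAILED", time.monotonic() - start)
```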
15. Technical Best Practices
•Speak with network staff to obtain information regarding network bandwidth and latency.
•Script where possible – minimize the chance of human error and reduce operational workload.
•Test the solution in an environment that closely mimics the production design.
•Document product architecture and configuration for production support and reference purposes.
16. Technical Best Practices
•Create and document a legal hold process jointly with records management / compliance and audit.
•Document operational procedures for day-to-day support and verification of correct operation.
•If using for warm standby, document the recovery procedures (e.g. rebuilds, jobs to be run, URL pointers to be changed, testing for correct operation etc.).
17. Technical Best Practices Cont'd
•Utilize encryption to protect data – follow company policy and/or vendor recommendations as required (see the sketch after this slide).
•Utilize compression as required, based on the network and the job window available – follow vendor recommendations and plan as required.
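To illustrate both points together, the sketch below compresses a backup file and then encrypts it with a symmetric key, using gzip from the standard library and Fernet from the third-party cryptography package. The file names are placeholders, and a real deployment would load the key from a managed secret store per company policy; compression is applied before encryption because ciphertext does not compress well.

```python
# Minimal sketch: compress, then encrypt, a backup before shipping it
# offsite. Uses gzip (stdlib) and Fernet symmetric encryption from the
# third-party "cryptography" package. File names are placeholders and
# the key would normally come from a key vault, not be generated inline.
import gzip

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # placeholder: load from a managed key store
fernet = Fernet(key)

with open("WSS_Content_Intranet.bak", "rb") as f:
    compressed = gzip.compress(f.read())  # compress first; encrypted
                                          # data does not compress well
with open("WSS_Content_Intranet.bak.gz.enc", "wb") as f:
    f.write(fernet.encrypt(compressed))
```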
18. Operational Best Practices
•Track and report on data (e.g. database) growth and replication times.
•Log events related to data replication (e.g. start, end, duration and errors) – see the sketch after this slide.
•Use help desk software to log and send messages to staff regarding status and success/errors.
•Test the replication solution on a regular basis (e.g. yearly, or after major technical and/or operational changes).
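One possible implementation of the logging and help desk bullets is below: each replication run records its start, end, duration, and outcome, and failures are posted to the ticketing system's REST API. The endpoint and payload are hypothetical; substitute your help desk product's actual API.

```python
# Minimal sketch: log each replication run (start, end, duration, errors)
# and raise a help desk ticket on failure. The ticketing endpoint and
# payload format are hypothetical placeholders.
import datetime
import logging
from typing import Optional

import requests

logging.basicConfig(filename="replication.log", level=logging.INFO)

def record_run(job: str, started: datetime.datetime,
               finished: datetime.datetime, error: Optional[str]) -> None:
    duration = (finished - started).total_seconds()
    logging.info("%s start=%s end=%s duration=%.0fs error=%s",
                 job, started.isoformat(), finished.isoformat(),
                 duration, error or "none")
    if error:
        requests.post(
            "https://helpdesk.example.com/api/tickets",  # hypothetical URL
            json={"summary": f"Replication job {job} failed",
                  "details": error},
            timeout=30,
        )
```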
19. Operational Best Practices Cont'd
•Test the legal hold process on a regular basis and involve the records management / compliance office and audit.
•Document the operational procedures for day-to-day operations/administration and troubleshooting.
•If operations are outsourced, make sure the contract includes the responsibilities and skill set required.
20. Operational Best Practices Cont'd
•Make commercial arrangements for support and software license maintenance.
•If you're in a very large organization, consider utilizing a product manager to look after the product lifecycle.
•Keep a diligent eye on operational jobs and possible overlap that could impact performance and proper completion of jobs (e.g. backup, replication, virus scans etc.) – see the sketch after this slide.
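Even a simple schedule check helps spot overlapping jobs before they collide in production. The sketch below flags jobs whose nightly windows overlap; all job names, start times, and durations are made-up examples.

```python
# Minimal sketch: flag operational jobs whose nightly windows overlap.
# Job names, start times and durations are made-up examples.
from datetime import datetime, timedelta

jobs = {  # job name -> (start time, expected duration)
    "backup":      (datetime(2014, 10, 6, 22, 0), timedelta(hours=4)),
    "replication": (datetime(2014, 10, 6, 23, 0), timedelta(hours=3)),
    "virus scan":  (datetime(2014, 10, 7, 3, 30), timedelta(hours=1)),
}

ordered = sorted(jobs.items(), key=lambda kv: kv[1][0])
for (name_a, (start_a, dur_a)), (name_b, (start_b, _)) in zip(ordered, ordered[1:]):
    if start_a + dur_a > start_b:  # next job starts before this one ends
        print(f"warning: {name_a} window overlaps {name_b}")
```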
21. Next steps
•Assemble a business case for replication
•Work with stakeholders and/or sponsors
•Scope a proof of concept
•Deploy proof of concept
•Deploy pilot
•Deploy to production
•Documentation for each step, with rigorous communication
22. Further Reading
•Metalogix Replicator
•SQL Server Replication
•Fundamentals of SQL Server 2012 Replication
•Pro SharePoint Disaster Recovery and High Availability (Expert's Voice in Sharepoint)
23. Contact Information
• Questions? Ideas or suggestions you want to share?
• Text chat or contact me at
– roncharity@gmail.com
– ca.linkedin.com/in/ronjcharity/
Editor's Notes
Final
Version 2.0
Date 10/6/2014
Left blank intentionally
Left blank intentionally
Mental note >> What's the point? Why should they care?
There are many ways to approach this topic
Being allotted 20 minutes, I must briefly touch on important areas
I’m a consultant / architect – I take a holistic approach with multiple viewpoints
Your level of success depends on what you’re managing to as success criteria
I will be prescriptive throughout the webinar and will be available through email
Left blank intentionally
Mental note >> What's the point? Why should they care?
You require a strategy and plan to be truly successful.
Success often depends on specific points of view.
Especially in large organizations without governance or those using outsourced resources.
An executive sponsor is key to your success.
Leverage the team, company policy, stakeholders.
Maneuver carefully around fiefdoms and other politics.
Mental note >> What's the point? Why should they care?
Successful people usually
Have a strategy
Have a solid network
Have some help
Think of it this way…
Coyote as your sponsor and Gorn as all the politics
Mental note >> What's the point? Why should they care?
The following are the common reasons for replication:
Content Replication
Copying SharePoint content to other regions to reduce content authoring workload, costs and publishing errors.
Also replicating content closer to end users improves the user experience – it reduces the distance/latency between content and user.
Disaster Recovery
Active / active multi-farm environment creating a highly resilient service offering.
Copy SharePoint content to another farm located in another geographic region that is acting as a warm standby.
Data Backup – Copying content databases to another site to comply with company offsite backup policy.
Mental note >> What's the point? Why should they care?
The following are the common types of replication:
Storage-level replication - At the storage level (focused on a block of binary data, typically offered by storage vendors) – fast, but does not allow granular control of application content replication. Also, storage-level replication is generally not within the control of the SharePoint team.
Database-level replication - Provided by your RDBMS (e.g. Microsoft SQL Server). Still fast, and provides more granular control of replication (at the database level). RDBMS-level replication is generally not within the control of the SharePoint team.
Application level – Replication at the application level enables replication in a more granular fashion, at the site collection, site, list and library level. Application-level replication is generally within the control of the SharePoint team.
Mental note >> What's the point? Why should they care?
The best practices and steps are broken down as follows:
Information Architecture – content level practices and steps you must take.
Technical architecture – technology design practices and steps you must take.
Operational – operational practices you must take.
Mental note >> What's the point? Why should they care?
Information architecture considerations:
Identify content (site collections, sites, lists and libraries) to be replicated.
Document publishing intervals for content to understand how often content is refreshed.
Document source content owners / publishers.
Obtain service levels to support decisions about the solution – metrics that will be communicated to stakeholders (expectations regarding replication of content).
Know your company policy specific to records management and privacy – especially when replicating content between countries.
Understand general disposition schedules for content. (A sketch of a content inventory capturing these attributes follows this list.)
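It helps to keep this inventory in a machine-readable form so it can drive the replication configuration later. A minimal Python sketch of such a content inventory; all URLs, owners, intervals and policies shown are hypothetical placeholders:

import csv

# one record per item to be replicated, capturing the attributes above
inventory = [
    {
        "scope": "site collection",
        "url": "https://intranet.example.com/sites/hr",
        "owner": "hr-publishing@example.com",
        "publish_interval": "daily",
        "records_policy": "retain 7 years",
        "disposition": "review annually",
    },
    {
        "scope": "library",
        "url": "https://intranet.example.com/sites/hr/policies",
        "owner": "hr-publishing@example.com",
        "publish_interval": "weekly",
        "records_policy": "retain 7 years",
        "disposition": "archive after 2 years",
    },
]

with open("replication_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=inventory[0].keys())
    writer.writeheader()
    writer.writerows(inventory)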
Mental note >> What's the point? Why should they care?
Technical considerations:
Run a risk workshop with stakeholders to create a risk plan (technical and operational).
If replicating for a warm standby solution (e.g. SharePoint), make sure you inventory all the required databases. Also know which databases not to replicate (e.g. the profile database) and the rebuild/job times required to re-populate them (e.g. profile and search).
If replicating for off-site backups know your company policy regarding retention and follow it.
Document and track data (e.g. database) size and growth patterns.
Document current operational jobs such as indexing, profile imports, virus scans and backup jobs for later use when planning replication jobs.
Investigate your network bandwidth and latency – they will impact replication times.
Make sure the product(s) can error-check in case of corruption.
Make sure the product(s) log replication times and durations – tie these into the helpdesk system for notification and reporting. (A sketch covering the window estimate, error check and duration logging follows this list.)
Speak with Network staff to obtain information regarding network bandwidth and latency.
Script and automate where possible – minimize the chance of human error and reduce operational workload.
Test the solution in an environment that closely mimics the production design.
Document product architecture and configuration for production support and reference purposes.
Create and document a legal hold process jointly with records management / compliance and audit.
Document operational procedures for day to day support and verification of correct operation.
If using for warm standby, document the recovery procedures (e.g. rebuilds, jobs to be run, URL pointers to be changed, testing for correct operation, etc.).
Utilize encryption to protect data – follow company policy and/or vendor recommendations as required.
Utilize compression as required, based on the network and the job window available – follow vendor recommendations and plan accordingly.
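A minimal Python sketch of the operational wrap-around these points describe: estimating the transfer window from size and bandwidth, verifying integrity with a checksum, and logging start/end/duration. The file copy stands in for whatever your replication product actually does, and the paths and log targets are hypothetical; a real solution would hook the product's own API and your helpdesk system.

import hashlib
import logging
import shutil
import time

logging.basicConfig(level=logging.INFO, filename="replication.log")

def estimated_hours(size_gb, effective_mbps):
    # rough transfer-window estimate: gigabytes over an effective Mbit/s link
    return (size_gb * 8 * 1024) / effective_mbps / 3600

def sha256(path):
    # checksum used to error-check the replica in case of corruption
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate(src, dst):
    start = time.monotonic()
    logging.info("replication start: %s -> %s", src, dst)
    shutil.copyfile(src, dst)        # stand-in for the real replication step
    ok = sha256(src) == sha256(dst)  # verify the copy arrived intact
    logging.info("replication end: ok=%s duration=%.1fs",
                 ok, time.monotonic() - start)
    if not ok:
        logging.error("checksum mismatch - raise a helpdesk ticket here")

# example: a 500 GB content database over a 200 Mbit/s effective link
print(f"estimated window: {estimated_hours(500, 200):.1f} h")  # ~5.7 hours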
Mental note >> What's the point? Why should they care?
Operational best practices:
Track and report on data (e.g. database) growth and replication times.
Log events related to data replication (e.g. start, end, duration and errors).
Use help desk software to log and send messages to staff regarding status and success/errors.
Test the replication solution on a regular basis (e.g. yearly, or after major technical and/or operational changes) by accessing replicated data (e.g. a database) or testing failover of the warm standby system.
Test the legal hold process on a regular basis and involve the records management / compliance office and audit.
Document the operational procedures for day to day operations/administration and troubleshooting.
If operations are outsourced, make sure the contract includes the responsibilities and skill sets required.
Make commercial arrangements for support and software license maintenance.
If you’re in a very large organization, consider utilizing a product manager to look after the product lifecycle.
Keep a diligent eye on operational jobs and possible overlaps that could impact performance and proper completion of jobs (e.g. backup, replication, virus scans, etc.); a sketch of an overlap check follows this list.
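A minimal Python sketch of this kind of tracking, assuming a CSV log of replication runs with hypothetical columns start (ISO datetime), db_size_gb and duration_min, and an assumed nightly backup window ending at 06:00:

import csv
from datetime import datetime

BACKUP_WINDOW_END = 6  # assumption: nightly backups finish by 06:00

with open("replication_runs.csv", newline="") as f:  # hypothetical log file
    rows = list(csv.DictReader(f))

# report growth and replication-time trends to stakeholders
sizes = [float(r["db_size_gb"]) for r in rows]
durations = [float(r["duration_min"]) for r in rows]
print(f"growth over period: {sizes[-1] - sizes[0]:.1f} GB")
print(f"average duration:   {sum(durations) / len(durations):.1f} min")

# flag runs that drift into the assumed backup window
for r in rows:
    if datetime.fromisoformat(r["start"]).hour < BACKUP_WINDOW_END:
        print(f"warning: run starting {r['start']} may overlap nightly backups")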
Mental note >> What's the point? Why should they care?
Assemble a business case for replication
Align with business need / key selling points financially and technically
Identify gaps and friction points
Work with stakeholders and/or sponsors
Get to know the commercial aspects
Know what they want from replication – the more leverage, the better
Scope a proof of concept
Keep it small and inexpensive
Focus on demonstrating concept / value
Deploy proof of concept
Limited run of 30–60 days
Review against success criteria
Deploy pilot
Deploy to production
Mental note >> What's the point? Why should they care?
Metalogix Replicator
http://www.metalogix.com/Products/Replicator/Replicator-for-SharePoint.aspx
SQL Server Replication
http://msdn.microsoft.com/en-us/library/ms151198.aspx
Fundamentals of SQL Server 2012 Replication
http://www.amazon.com/Fundamentals-SQL-Server-2012-Replication/dp/1906434999/ref=sr_1_1?s=books&ie=UTF8&qid=1410903907&sr=1-1&keywords=sql+server+replication
Pro SharePoint Disaster Recovery and High Availability (Expert's Voice in Sharepoint)
http://www.amazon.com/SharePoint-Disaster-Recovery-Availability-Sharepoint/dp/1430263288