This talk, given to the SharePoint Users Group of DC in July 2013, describes the approach Exostar took to migrating a client's 8TB site collection to a new SharePoint 2010 environment.
This document outlines an integration between the Contur and HEOS software. The integration is focused on allowing scientists to record experimental data in Contur and register compounds to HEOS as part of their workflow. It describes the Contur REST API and protocol execution framework that can extract and create Contur content. It also describes the HEOS SOAP API that can extract and create content in HEOS, including registering compounds. Components and protocols are provided that use these APIs to facilitate transferring data directly from Contur experiments to HEOS compound registration, without needing to re-enter information, in order to save time and reduce errors.
(ATS6-PLAT07) Managing AEP in an enterprise environment (BIOVIA)
Deployments can range from personal laptop usage to large enterprise environments. The installer allows both interactive and unattended installations. Key folders include Users for individual data, Jobs for temporary execution data, Shared Public for shared resources, and XMLDB for the database. Logs record job executions, authentication events, and errors. Tools like DbUtil allow backup/restore of data, pkgutil creates packages for application delivery, and regress enables test automation. Planning folder locations and maintenance is important for managing resources in an enterprise environment.
(ATS6-DEV05) Building Interactive Web Applications with the Reporting Collection (BIOVIA)
The document discusses building interactive web applications using the Reporting Collection. It describes components like forms, data connectors, interactive elements and AJAX capabilities that allow adding interactivity. The reporting components generate reports in formats like HTML, PDF from data and layouts. Interactive components allow generating full web applications without additional coding. Forms capture user input. The data connector synchronizes selections across visualizations. Protocol links and functions enable drill-down and AJAX functionality. JavaScript attributes and components add custom scripting.
Office Track: Exchange 2013 in the real world - Michael Van Horenbeeck (ITProceed)
This document summarizes a presentation about deploying and managing Exchange 2013 in a real-world environment. It discusses planning the namespace design and server topology across multiple datacenters for high availability. It also covers installing Exchange 2013 and ensuring interoperability with older Exchange versions. Finally, it describes the new "Managed Availability" monitoring and remediation features in Exchange 2013.
This document summarizes a presentation about building RESTful applications using PHP. REST (Representational State Transfer) is a software architectural style that uses HTTP verbs to manipulate resources. The presentation covers the six constraints that define REST - client-server separation, statelessness, cacheability, uniform interface, layered system, and code on demand. It provides tips for implementing REST in PHP like using the header() function, determining the request method, and encoding/decoding JSON. Questions from the audience are answered at the end.
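The PHP tips mentioned boil down to one pattern: inspect the HTTP method, route to the matching resource operation, and encode/decode JSON payloads. A language-neutral sketch of that dispatch logic (resource names and routes are illustrative, not from the talk):

```python
import json

# In-memory store standing in for a real backend; purely illustrative.
RESOURCES = {"1": {"id": "1", "name": "widget"}}

def handle_request(method, path, body=None):
    """Route an HTTP verb to the matching resource operation."""
    resource_id = path.rstrip("/").split("/")[-1]
    if method == "GET":
        item = RESOURCES.get(resource_id)
        if item is None:
            return 404, json.dumps({"error": "not found"})
        return 200, json.dumps(item)
    if method == "POST":
        data = json.loads(body)           # decode the incoming JSON payload
        RESOURCES[data["id"]] = data
        return 201, json.dumps(data)      # encode the response as JSON
    if method == "DELETE":
        RESOURCES.pop(resource_id, None)
        return 204, ""
    return 405, json.dumps({"error": "method not allowed"})
```

In PHP the same shape uses `$_SERVER['REQUEST_METHOD']` to determine the verb, `header()` to set the status line, and `json_encode()`/`json_decode()` for the payloads.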
(ATS6-DEV06) Using Packages for Protocol, Component, and Application Delivery (BIOVIA)
Delivering protocols, components, and applications to users and other developers on an AEP server can be very challenging. Accelrys delivers the majority of its AEP services in the form of packages. This talk will discuss the methods that anyone can use to deliver bundled applications in the form of packages and the benefits of doing so. The discussion will cover how to create packages, modify existing packages, deploy packages to servers, and use tools that help ensure package quality.
(ATS6-DEV04) Building Web MashUp applications that include Accelrys Applicati... (BIOVIA)
One of the biggest challenges in most corporate environments is providing a way for users to access all the data they need, usually stored in multiple disparate locations, from one interface that they are comfortable with. As web applications have become more popular, RESTful APIs have emerged as the preferred web service format in recent years. Many Accelrys applications now provide RESTful APIs that allow developers to build mashup applications. This session will explore some of these APIs and how to use them to build a simple application.
1. The document discusses Discngine's Tibco Spotfire Pipeline Pilot connector, which allows graphs stored in Pipeline Pilot to be accessed and visualized in Spotfire.
2. It describes the architecture of the connector and how it executes Pipeline Pilot protocols to generate HTML pages for visualization in Spotfire.
3. Challenges in integrating the large Spotfire API and synchronizing client and server datasets are also discussed.
Tika is an open source project that provides a generic API for extracting metadata and structured text content from various document formats. It uses automatic content type detection to parse documents without needing to know the file type in advance. The project aims to pool efforts across various Apache projects like Apache POI and Apache PDFBox to provide a common solution for parsing different file types.
The Query Service is the new platform solution for querying a variety of data sources. The goal of Query Service is that administrators can configure a metadata description of the data source that can then be used by end users without detailed knowledge of the underlying data source. This session explains how to configure Query Service data sources and use them with the RESTful API or component collection.
2012.10 Liferay Europe Symposium, Alistair Oldfield (Emeldi Group)
This document outlines the approach and execution of migrating a Microsoft SharePoint site into Liferay. It describes inspecting the SharePoint environment and exporting content like pages, documents, and images into XML packages. Custom SharePoint web parts are reimplemented as Java portlets in Liferay. During import, URLs are rewritten and markup is filtered to replace links to the original SharePoint site. The automated, repeatable process allows full migration of content while reusing ported applications in new Liferay installations.
This deck was created by David Draper for Alfresco TTL 70 on October 2, 2013.
It covers enhancements to the Spring Surf framework as used by Alfresco Share.
This document outlines an approach for migrating content and functionality from a Microsoft SharePoint site into Liferay. It describes exporting SharePoint pages, libraries, and web parts into XML packages, then importing them into Liferay using custom importers. Web part functionality is reimplemented as Java portlets, and SharePoint URLs in content are rewritten during import. The automated, repeatable process allows full migration of a SharePoint site structure and content into Liferay.
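The URL-rewriting step described above can be sketched as a simple pass over the exported markup: links pointing at the old SharePoint site are mapped onto their Liferay equivalents during import. The host names and path mapping below are assumptions for illustration, not details from the talk:

```python
import re

# Hypothetical host names standing in for the real environments.
OLD_HOST = "http://sharepoint.example.com"
NEW_HOST = "http://liferay.example.com"

def rewrite_links(markup: str) -> str:
    """Rewrite SharePoint links in exported markup to Liferay URLs."""
    # Point absolute links at the new portal host.
    markup = markup.replace(OLD_HOST, NEW_HOST)
    # Map /Pages/Foo.aspx style page links onto Liferay friendly URLs.
    return re.sub(r"/Pages/([\w-]+)\.aspx", r"/web/guest/\1", markup)
```

A real importer would also handle relative links, query strings, and document-library paths, but the shape is the same: a filter applied to each content item as it is imported.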
Sitecore 9 key features by Jitendra Soni - Presented in Sitecore User Group UK
Sitecore 9 key features and upcoming updates were presented. The presentation included:
- An overview of Sitecore 9 features like using Microsoft SQL, improved search, and rule-based configuration.
- Demonstrations of the Sitecore Installation Framework (SIF) and how to install Sitecore 9 in 15 minutes.
- Explanations of Sitecore Forms, the Data Exchange Framework (DEF), xConnect, and role-based configuration.
- Information on the JavaScript Services (JSS) framework, marketing automation, Cortex machine learning, and Experience Commerce (XC).
- Details of upcoming products Horizon and Zenith which will focus on insights-powered experiences and headless capabilities.
This document discusses automating the import of source metadata into Oracle Warehouse Builder (OWB) through sampling to reduce errors, effort, and unnecessary work. It describes sampling metadata from Oracle modules by creating a module, database link, and importing selected object types and objects. It also covers sampling metadata from flat files by importing metadata, sampling a number of records with different delimiters, record types, and headers to generate a control file.
The document discusses web servers and their key components and functions. It covers:
1) The definition of a web server as a program that generates and transmits responses to client requests for web resources by parsing requests, authorizing access, and constructing responses.
2) How web servers handle client requests through steps like parsing requests, authorizing access, and transmitting responses. They can also dynamically generate responses through server-side includes and server scripts.
3) Techniques web servers use like access control through authentication and authorization, passing data to scripts, using cookies, caching responses, and allocating resources through event-driven, process-driven, and hybrid architectures.
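The request-handling steps above (parse the request line, authorize access, construct the response) can be illustrated with a toy handler. The paths and allow-list are illustrative assumptions; a real server would also parse headers, support more methods, and stream bodies:

```python
# Minimal sketch of a web server's per-request logic.
ALLOWED_PATHS = {"/", "/index.html"}

def handle(raw_request: str) -> str:
    """Parse a raw HTTP request and construct a response string."""
    # Step 1: parse the request line, e.g. "GET /index.html HTTP/1.1".
    request_line = raw_request.splitlines()[0]
    method, path, _version = request_line.split()
    if method != "GET":
        return "HTTP/1.1 405 Method Not Allowed\r\n\r\n"
    # Step 2: crude authorization check against an allow-list.
    if path not in ALLOWED_PATHS:
        return "HTTP/1.1 403 Forbidden\r\n\r\n"
    # Step 3: construct and return the response.
    body = "<html><body>Hello</body></html>"
    return (
        "HTTP/1.1 200 OK\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Content-Type: text/html\r\n\r\n" + body
    )
```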
Developing, Debugging and Administrating Your Integration Scenarios with WSO2... (WSO2)
This document provides an overview and agenda for developing, debugging, and administering integration scenarios with WSO2 Enterprise Integrator. It discusses the key features of WSO2 EI including interoperability, tooling, scenarios, reliability, and observability. The tooling section describes the developer studio IDE and data transformation capabilities. Scenarios covered include data integration, transformations, guaranteed delivery, and workflows. Deployment and analytics dashboards are also summarized.
Kafka Connect allows data ingestion into Kafka from external systems by using connectors. It provides scalability, fault tolerance, and exactly-once semantics. Connectors are run as tasks within workers that can run in either standalone or distributed mode. The Schema Registry works with Kafka Connect to handle schema validation and evolution.
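Concretely, a connector is configured with a small JSON document submitted to the Connect REST API. The example below uses the FileStreamSource demo connector that ships with Apache Kafka; the file path and topic name are placeholders:

```json
{
  "name": "local-file-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/input.txt",
    "topic": "connect-test"
  }
}
```

In distributed mode this document is POSTed to the workers' `/connectors` endpoint, and the cluster takes care of spreading the tasks across workers.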
Troubleshooting and Best Practices with WSO2 Enterprise Integrator (WSO2)
This slide deck discusses how to troubleshoot an issue in WSO2 Enterprise Integrator and follow best practices in order to optimize output and avoid failure.
Watch webinar here: https://wso2.com/library/webinars/2018/10/troubleshooting-and-best-practices-with-wso2-enterprise-integrator
This document discusses Oracle Warehouse Builder (OWB) repositories, including their design, runtime, logical versus physical aspects, and installation process. It describes importing source system metadata, building logical mappings, deploying mappings, and running mappings. Repositories contain modules, mappings, transformations, process flows, data structures, and file structures. The runtime repository installs and runs mappings, monitors processes, and audits results. Logical and physical repositories differ in their contents, with physical repositories containing actual database objects.
Enterprise Integration Patterns - Spring way (Dragan Gajic)
We are living in a connected world, where unrestricted sharing of data has become an ultimate goal.
This is reflected in the architecture of software systems, where the main focus is on building integrated enterprise solutions.
Enterprise Integration Patterns (EIP) are used to architect and design integrated solutions.
Over the past decades, the following integration styles have been identified:
- file transfer
- shared DB
- RMI
- messaging
Asynchronous messaging is the most widely used architectural style for enterprise integration. It allows building loosely coupled solutions that overcome the limitations of remote communication.
This talk is about messaging and the way Spring supports EIP via the Spring Integration project.
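The loose coupling that asynchronous messaging provides can be shown in miniature: producer and consumer share only a channel, never each other's interfaces or availability. (The talk realizes this with Spring Integration; this Python sketch just illustrates the principle with an in-process queue.)

```python
import queue
import threading

channel = queue.Queue()   # the message channel shared by both sides
received = []

def consumer():
    """Drain the channel until a shutdown sentinel arrives."""
    while True:
        message = channel.get()   # blocks until a message is available
        if message is None:       # sentinel signalling shutdown
            break
        received.append(message.upper())

worker = threading.Thread(target=consumer)
worker.start()
for payload in ["order placed", "order shipped"]:
    channel.put(payload)          # the producer never waits on the consumer
channel.put(None)
worker.join()
```

Replace the in-process queue with a broker (JMS, RabbitMQ, Kafka) and the same decoupling holds across processes and machines, which is exactly the gap Spring Integration's channel adapters fill.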
Kafka Connect: Real-time Data Integration at Scale with Apache Kafka, Ewen Ch... (Confluent)
Many companies are adopting Apache Kafka to power their data pipelines, including LinkedIn, Netflix, and Airbnb. Kafka’s ability to handle high throughput real-time data makes it a perfect fit for solving the data integration problem, acting as the common buffer for all your data and bridging the gap between streaming and batch systems.
However, building a data pipeline around Kafka today can be challenging because it requires combining a wide variety of tools to collect data from disparate data systems. One tool streams updates from your database to Kafka, another imports logs, and yet another exports to HDFS. As a result, building a data pipeline can take significant engineering effort and has high operational overhead because all these different tools require ongoing monitoring and maintenance. Additionally, some of the tools are simply a poor fit for the job: the fragmented nature of the data integration tools ecosystem leads to creative but misguided solutions such as misusing stream processing frameworks for data integration purposes.
We describe the design and implementation of Kafka Connect, Kafka’s new tool for scalable, fault-tolerant data import and export. First we’ll discuss some existing tools in the space and why they fall short when applied to data integration at large scale. Next, we will explore Kafka Connect’s design and how it compares to systems with similar goals, discussing key design decisions that trade off between ease of use for connector developers, operational complexity, and reuse of existing connectors. Finally, we’ll discuss how standardizing on Kafka Connect can ultimately lead to simplifying your entire data pipeline, making ETL into your data warehouse and enabling stream processing applications as simple as adding another Kafka connector.
Syntergy upgrade OpenText Content Server with Replicator - 7-3-2016 (Vijay Sharma)
The Smart Way to Upgrade Content Server - Zero Downtime, Single Hop - In our experience, production outages, disruptions and resource availability are major barriers to organizations performing a timely Livelink or Content Server upgrade to the latest releases of OpenText Content Server 10.X & 16.
Over a number of years, Syntergy has developed a proven upgrade methodology built around Syntergy's Replicator for OpenText Content Server software. This approach gives you the capability to perform upgrades directly from older versions of Livelink and Content Server to the latest releases in a Single Hop (no need to "hop" through multiple version upgrades) with Zero Downtime.
Spring Web Service, Spring Integration and Spring Batch (Eberhard Wolff)
This presentation shows Spring Web Services, Spring Integration and Spring Batch applied to a typical scenario. It walks through the advantages of the technologies and their sweet spots.
Migrating Very Large Site Collections (SPSDC) (kiwiboris)
This document discusses migrating a large 8 TB SharePoint site collection to a new farm within a 96 hour maintenance window. Key points:
- The site collection is too large to migrate as-is, so it will be split by promoting some subsites to new site collections.
- Metalogix Content Matrix will be used to script the migration in parallel batches to complete it on time.
- Challenges include maintaining performance over the large data set and validating a 99% accurate migration within the narrow window. Careful scripting and testing are required to successfully migrate such a large amount of content.
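The parallel-batch idea above can be sketched simply: split the promoted site collections into batches and run the per-site migrations concurrently so the total fits inside the maintenance window. The site names and the migrate step below are placeholders (the actual migration was scripted with Metalogix Content Matrix):

```python
from concurrent.futures import ThreadPoolExecutor

def migrate(site: str) -> str:
    """Placeholder for the real per-site migration call."""
    return f"{site}: migrated"

# Hypothetical list of subsites promoted to their own site collections.
sites = [f"subsite-{i}" for i in range(8)]

# Run migrations in parallel batches; map preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(migrate, sites))
```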
What Makes Migrating to the Cloud Different Than On-Premises (Christian Buckley)
My second presentation from #SPTechCon Boston 2014, focusing on the limitations and performance concerns of migration to SharePoint Online (part of Office 365).
The document discusses adopting the AnswerModules Suite for OpenText Content Server. It describes two scenarios: 1) For a new Content Server installation, the Suite can improve setup efficiency, quickly add functionality, and support data migration from legacy systems. 2) For an existing Content Server, the Suite can extend existing modules, workflows, and the user interface. Key benefits mentioned include automating environment setup using Content Script, enabling rapid prototyping, and integrating Content Script with WebReports and workflows.
The document discusses new features in IBM Lotus Web Content Management (WCM) 6.1, including performance improvements, enhanced authoring capabilities, new rendering tags, expanded integration, additional APIs, custom workflow actions, integration with WebSphere Portal pages, and enhanced search capabilities.
2011.10 Liferay European Symposium, Alistair Oldfield (Emeldi Group)
The document summarizes a case study of implementing new enterprise features for Liferay portal at CSOB bank. Key points include migrating from a legacy SharePoint portal to Liferay, developing features like legacy link management, reference management, and link integrity checks. Google Analytics integration and a monitoring portlet were also created to integrate analytics and monitor backend resources from within the portal. The migration and new features enhanced the portal's capabilities for CSOB's enterprise needs.
How to – wrap soap web service around a databaseSon Nguyen
This document provides steps to create a SOAP web service API that acts as an abstraction layer for a database. It describes configuring a Mule application with a CXF component using a WSDL, adding a database connector to query data, and transforming the response to the SOAP message format. The API decouples front-end applications from changes in the backend database.
Cross Site Collection Navigation using SPFx, Powershell PnP & PnP-JSThomas Daly
The document summarizes Thomas Daly's presentation on using SPFx, PowerShell PnP, and PnP-JS to create cross-site collection navigation in SharePoint. It discusses using a SharePoint list as the data source for global navigation and creating an SPFx application customizer to render the navigation. It also covers enhancing the solution with additional data sources and caching for performance.
The document summarizes Thomas Daly's presentation on creating cross-site collection navigation using SharePoint Framework (SPFx) extensions, PowerShell PnP, and PnP-JS. It discusses using a SharePoint list as the data source for global navigation and creating templates for it using PnP Powershell. It also covers building an SPFx Application Customizer to connect to the list and render the navigation, as well as options for caching and deploying the solution across a tenant.
SharePoint Saturday Toronto 2015 - Inside the mind of a SharePoint ArchitectNoorez Khamis
This document contains the speaker bio and presentation for Noorez Khamis on the topic of SharePoint architecture best practices. The speaker is a SharePoint architect with over 14 years of experience designing and implementing SharePoint solutions. The presentation covers a wide range of architectural topics for SharePoint including hardware requirements, deployment models, scaling, security, services, and tools like PowerShell. It provides guidance on choosing the right approach for various scenarios.
This document discusses content deployment in SharePoint, including:
- An overview of content deployment and why it is used to deploy changes between environments like development, testing, and production.
- The basics of how content deployment works, including content deployment paths and jobs that define when and where content is deployed.
- Different deployment strategies and scenarios for using content deployment between environments with varying permissions and roles.
- A walkthrough of setting up a typical content deployment topology between an authoring, staging, and production farm.
- Common questions about how content deployment handles things like custom code and configurations.
This document discusses new client configuration and extension points introduced in Alfresco Share 4.0. It provides an overview of the goals, which were to make customizing client-side JavaScript and extending Share easier. It describes the new approaches, including customizing existing files, using component configuration, and introducing sub-components. It also discusses extensions, modules, tooling like SurfBug, and demos the capabilities.
The document discusses various components in Mule ESB including the File, Database, Web Service, REST, and DataWeave components.
The File component allows exchanging files with the file system and can act as an inbound or outbound endpoint. The Database component connects to relational databases using JDBC to perform SQL operations. The Web Service component allows consuming and building web services. The REST component enables configuring Mule as a RESTful service. The DataWeave component replaces the DataMapper and uses a JSON-like language to transform data.
In this webinar, we review the steps necessary to design, set up, and deploy IT cloud infrastructure for running a multi-server, Microsoft SharePoint Server farm on AWS. In this webinar we will also cover how to architect for high availability and provision the relevant AWS services and resources to run SharePoint Server workloads at scale on the AWS Cloud. You will find out where to access available content and tools, such as AWS CloudFormation templates and the Advanced Implementation Guide that will help you quickly implement and customize a scalable, enterprise-class SharePoint Server farm on AWS. This webinar is designed for a technical audience. After the presentation, you will have an opportunity to participate in a live Q&A discussion, where you may write in questions to AWS team members.
Envision IT - Application Lifecycle Management for SharePoint in the EnterpriseEnvision IT
SharePoint has become mission critical, complex, and wide-ranging in most enterprises. How do we apply the best practices of ALM in this environment?
Learn more from this presentation, delivered by Envision IT, Leaders in SharePoint Solutions
Best Practices to SharePoint Architecture Fundamentals NZ & AUSguest7c2e070
This document provides an overview of SharePoint architecture best practices presented by Joel Oleson, an 8-year veteran of SharePoint. It discusses SharePoint logical and physical architecture fundamentals, common mistakes to avoid, and differences between Windows SharePoint Services 3.0 and SharePoint Server 2007. Key concepts covered include web applications, site collections, sites, servers having specific roles, and topology planning based on usage and requirements. The presentation aims to help admins properly plan their SharePoint implementations.
Planning Your Migration to SharePoint Online #SPBiz60Christian Buckley
Session from SPBiz.com online event on June 18th, 2015. It’s always best to begin with a plan, and this session will provide a framework for developing your own migration plan. While tools will help automate some aspects of the content move, much of the complexity of a SharePoint migration happens before a tool is installed. This session will help analysts, project managers and admin of SharePoint to reduce migration time and increase success.
FED is a TYPO3 framework that allows developers to build TYPO3 websites using Fluid templates and Extbase, providing view helpers, services and content elements to replace TemplaVoila. It is lightweight, portable between extensions, and offers flexible configuration of pages and content through Fluid and FlexForms for the backend editing experience. FED aims to provide the combined power of Extbase, Fluid and optimized TYPO3 performance for developers while enabling designers to easily implement customizable templates.
The document provides an overview of SharePoint development, including its various versions over time, what SharePoint is used for, and different approaches to customizing and developing for SharePoint. It discusses configuration vs customization vs development. It then covers key development approaches like using Visual Studio templates, solution packages, features, farm vs sandboxed solutions, and the server-side vs client-side object models. It also discusses the SharePoint app model and different app locations like host webs and app webs.
Nuts and bolts of running a popular site in the aws cloudDavid Veksler
I will share how we develop and host a popular publishing platform in the cloud with a limited budget and technology team.
We'll cover architecture, including a variety of services at Amazon Web Services such as elastic load balancing, S3, Elastic Beanstalk, and RDS in the context of a real site.
We'll cover how we control costs with Spot and burstable instances and scale up with distributed caching.
Finally we'll discuss continuous deployment strategies for Windows and Linux-based cloud applications in the context of a distributed team using an agile process.
Similar to Migrating very large site collections (20)
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc...DanBrown980551
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
High performance Serverless Java on AWS- GoTo Amsterdam 2024Vadym Kazulkin
Java is for many years one of the most popular programming languages, but it used to have hard times in the Serverless community. Java is known for its high cold start times and high memory footprint, comparing to other programming languages like Node.js and Python. In this talk I'll look at the general best practices and techniques we can use to decrease memory consumption, cold start times for Java Serverless development on AWS including GraalVM (Native Image) and AWS own offering SnapStart based on Firecracker microVM snapshot and restore and CRaC (Coordinated Restore at Checkpoint) runtime hooks. I'll also provide a lot of benchmarking on Lambda functions trying out various deployment package sizes, Lambda memory settings, Java compilation options and HTTP (a)synchronous clients and measure their impact on cold and warm start times.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
2. Boris A. Velikovich – Software Architect
Email: boris.velikovich@exostar.com
LinkedIn: www.linkedin.com/in/bvelikovich/
Blog: http://kiwiboris.blogspot.com
Twitter: @BVelikovich
Since 2007, I have been working for Exostar
Involved in A&D and Big Pharma projects
3. Leading provider of secure collaboration solutions and business process integration throughout the extended value chain.
Exostar’s ForumPass is a cloud-based, enterprise-class, complete B2B project collaboration service offering.
ForumPass executes within Exostar’s Community Cloud, a connect-once environment anchored by Exostar’s Identity Hub that brings companies and their customers, partners, and suppliers together.
4. One of the ForumPass site collections is 8 TB
This is twice as large as the recommended maximum
More than 30,000 users
Migrating the farm to SharePoint 2010
The huge site collection needs to be split
For this reason, this migration cannot be done using conventional methods such as in-place migration or database attach
At least 99% of the data must be preserved during the migration
5. We chose Metalogix Content Matrix as our migration software
Allows a read-only direct connection to the source database - important for performance reasons
Metalogix allows scripting migration activities
Provides PowerShell cmdlets
Allows running several migration activities simultaneously, thus speeding up the process
Allows full and incremental copies
Important because incremental copies take less time than full copies
Each script can take parameters
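Running several migration activities simultaneously can be sketched with standard PowerShell background jobs. A minimal sketch, assuming the Metalogix snap-in is loaded in the Content Matrix PowerShell Console; the batch script name, its parameters, and the CSV path are hypothetical:

```powershell
# Sketch: launch several migration batches in parallel with PowerShell jobs.
# Migrate-Batch.ps1, its arguments, and batches.csv are illustrative names -
# they stand in for whatever per-batch script and input your migration uses.
$batches = Import-Csv "C:\Migration\batches.csv"   # one row per batch of subsites

foreach ($batch in $batches) {
    # Each job runs one migration activity; more jobs = more parallelism
    Start-Job -FilePath "C:\Migration\Migrate-Batch.ps1" `
              -ArgumentList $batch.InputCsv, $batch.Incremental
}

# Wait for all batches to finish and collect their output for review
Get-Job | Wait-Job | Receive-Job
```

Keeping each batch in its own job makes it easy to rerun only the batches that fail.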
6. The new environment has to be fully functional
•SharePoint farm installation
•Web application configuration
•Service application configuration
•Firewalls configured
•Etc.
Code has to be migrated
•Feature IDs need to be preserved
•If migrating from MOSS 2007, code has to be compatible with SharePoint 2010
•In particular, code that refers to user profiles or search
•All the solutions need to be deployed
PowerShell has to be prepared
•Use Content Matrix PowerShell Console
•Make sure your powershell.exe.config file contains the settings necessary to initialize features
7. Each first-level subsite is promoted to a site collection
Some but not all second-level subsites are promoted to site collections
No other subsites are promoted to site collections (for complexity reasons)
The content of the top-level site of the site collection (libraries, lists, images, etc.) is NOT migrated
8. For each first-level subsite:
•Create a new content database
•In this content database, create a new site collection based on the standard template
•Then, two options:
•1) copy the content of the subsite to this new site collection
•Since some second-level subsites are promoted to their own site collection, a site filter is required
•or
•2) copy the subsite to this new site collection
9. Copy-MLAllSharePointSiteContent or Copy-MLSharePointSite
The specific parameters depend on the choice of cmdlet, as well as on your migration requirements
E.g., you don’t want to migrate themes if you are migrating from MOSS 2007 to SharePoint 2010
Make sure that the SiteFilterExpression is present if you plan to promote certain subsites to their own site collections
Certain parameters might affect performance
Sometimes it is worth prototyping the migration operation in the GUI
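A hedged sketch of a per-subsite copy follows. Only the cmdlet name and the SiteFilterExpression and MergeSiteFeatures parameters come from this deck; the connection variables and filter value are illustrative, and exact parameter names vary by Content Matrix version:

```powershell
# Sketch only - verify the real signature with
# Get-Help Copy-MLSharePointSite -Full in the Content Matrix PowerShell Console.
# $source and $target stand in for connection objects created through the
# snap-in; $filter would exclude second-level subsites that are being
# promoted to their own site collections.
$filter = $excludePromotedSubsitesFilter   # hypothetical filter expression

Copy-MLSharePointSite -Source $source -Target $target `
    -SiteFilterExpression $filter `
    -MergeSiteFeatures:$false
```

Prototyping the same operation in the GUI first is a good way to discover which parameters your version actually exposes.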
10. Use Copy-MLAllSharePointSiteContent when
The URL of the new site collection has to stay exactly the same as that of the first-level subsite, or
You want the first-level subsite content on the root level of the newly-created site collection, and the site template of that subsite does not interfere with the site template of the root site
In all other cases, use Copy-MLSharePointSite
12. At the very least, it should include:
Server-relative source URL
E.g., /sites/mycompany/SomeCoolSite
Managed path
E.g., /customers/ or /sites/mycompany
Site Name
E.g., SomeCoolSite
Site Description
E.g., Some Cool Site
Whether the migration is full or incremental
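One possible shape for that input file, read with Import-Csv; the column names are an assumption, not the talk's exact format, and the example row reuses the sample values above:

```powershell
# Illustrative input CSV for the per-subsite migration script.
# Column names are assumptions; adjust to your own script's parameters.
@'
SourceUrl,ManagedPath,SiteName,SiteDescription,Incremental
/sites/mycompany/SomeCoolSite,/customers/,SomeCoolSite,Some Cool Site,false
'@ | Set-Content "C:\Migration\input.csv"

foreach ($row in Import-Csv "C:\Migration\input.csv") {
    # Each row drives one site-collection creation plus copy operation
    Write-Host "$($row.SourceUrl) -> $($row.ManagedPath)$($row.SiteName)"
}
```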
13. At the very least, it should contain the site-collection-relative URLs of excluded subsites
14. Should contain:
• Input CSV file path
• Exclusion CSV file path
• Source information
• DB Server, content DB, root URL, template path, etc.
• Target information
• DB Server, farm administrator, root URL
• Metalogix job history path
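The configuration above might be expressed as a param block at the top of the migration script. A sketch only; every parameter name here is an assumption derived from the list above, not the talk's actual script:

```powershell
# Illustrative param block for the migration script.
# All names are assumptions mirroring the configuration items listed above.
param(
    [string]$InputCsvPath,      # per-subsite input CSV
    [string]$ExclusionCsvPath,  # subsites excluded via SiteFilterExpression
    [string]$SourceDbServer,
    [string]$SourceContentDb,
    [string]$SourceRootUrl,
    [string]$SourceTemplatePath,
    [string]$TargetDbServer,
    [string]$TargetFarmAdmin,
    [string]$TargetRootUrl,
    [string]$JobHistoryPath     # where Metalogix writes job history
)
```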
15.
16. Some second-level subsites are promoted to site collections
These site collections’ URLs are new
A separate script is needed
Script configuration similar to what we’ve seen
Input CSV should include the URL of the new site collection, as well as the web template of the site copied
The Copy-MLSharePointSite cmdlet is used in the script
New site collections are created in new content databases
17.
18. Be careful with Team Sites
-MergeSiteFeatures parameter
If it is true and you migrate from MOSS 2007 to SharePoint 2010, then the web parts from default.aspx will move to SitePages/Home.aspx and default.aspx will be empty - this causes great confusion for users
If it is false and you used the Copy-MLAllSharePointSiteContent cmdlet, you need to make sure that all necessary site collection features are activated
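With -MergeSiteFeatures set to false, the required site collection features can be activated afterwards with the standard SharePoint 2010 Enable-SPFeature cmdlet. The feature names and site URL below are examples, not a list from the talk:

```powershell
# Activate required site collection features on a newly-created site collection.
# Feature names and the URL are examples; Enable-SPFeature and Get-SPFeature
# are standard SharePoint 2010 cmdlets. The check avoids re-activating a
# feature that is already on.
$features = @("PublishingSite", "LegacyWorkflows")
$siteUrl  = "https://portal.example.com/customers/SomeCoolSite"

foreach ($f in $features) {
    $active = Get-SPFeature -Site $siteUrl | Where-Object { $_.DisplayName -eq $f }
    if (-not $active) {
        Enable-SPFeature -Identity $f -Url $siteUrl
    }
}
```

Looping this over every newly-created site collection covers the gap that -MergeSiteFeatures:$false leaves behind.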
19. Full copy: workflow associations are copied, workflow instances are NOT
Possible to copy Nintex or SharePoint Designer workflow associations
Incremental copy: workflow associations are NOT copied
Thus, users should NOT create new workflow associations after the full copy has run
The LegacyWorkflows feature needs to be activated on newly-created site collections
20. Make sure you add site collection admins to the newly-created site collections
Involve users (CFT)
Their feedback will identify the problem areas
Run incremental migrations as needed
21. Metalogix provides comparison reports to verify the completeness of a migration job
Also, Metalogix provides logs for each job
When your testers identify a migration issue, the reports and logs will help you troubleshoot
Sometimes an additional incremental copy might be needed
22. The hardest thing to troubleshoot
Migrating an 8 TB site collection may well take more than 1,024 times as long as migrating an 8 GB site collection
The migration rate can go down over time
C:\Users\SomeUser\AppData\Roaming\Metalogix\Content Matrix Console - SharePoint Edition\ApplicationSettings.xml
PerActionResourceUse - controls how many migration activities run in parallel
Trade-off: a higher value means more parallelism, but since parallelism is applied only where possible, the variance of load within a job becomes less predictable
SQLQueryTimeoutTime - you can also lose data if the timeout is too low
Disable verbose logging
23. Migrating a very large site collection:
Typically involves splits, which means that a third-party product such as Metalogix Content Matrix will be needed
Can be scripted, with scripts running in parallel
Requires comparison reports to ensure completeness
Presents performance challenges, as the migration rate tends to go down