Loading a Data Warehouse using SSIS 2012
1. Loading a Data Warehouse using SSIS 2012
Presented By:
James Phillips
May 2, 2012
2. Planning, Planning, Planning
How much data is going to be processed?
How often will the data be processed?
What is the SLA (Service Level Agreement) of your data warehouse?
You can never have too much memory!
Create or choose a framework and stick with it.
http://sqlblog.com/blogs/andy_leonard
3. Designing your ETL
Don’t forget the T in ETL
Keep individual SSIS packages streamlined.
Don't rush! Don't take shortcuts!
Track your dependencies and document them.
Make it flexible.
You can never have enough logging.
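On the logging point, here is a minimal sketch of the kind of execution-log table a custom framework might maintain. Nothing below is built into SSIS; the table and procedure names are hypothetical, and the procedures would typically be called from Execute SQL Tasks at the start and end of each package, with the returned LogID held in an SSIS variable to tie the two calls to the same run.

-- Hypothetical framework logging objects; all names are illustrative only.
CREATE TABLE dbo.PackageExecutionLog (
    LogID       INT IDENTITY(1,1) PRIMARY KEY,
    PackageName NVARCHAR(260) NOT NULL,
    StartTime   DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    EndTime     DATETIME2 NULL,
    RowsLoaded  INT NULL,
    Status      NVARCHAR(20) NOT NULL DEFAULT N'Running'
);
GO

-- Called from an Execute SQL Task when the package starts;
-- the output LogID is stored in an SSIS variable for the closing call.
CREATE PROCEDURE dbo.LogPackageStart
    @PackageName NVARCHAR(260),
    @LogID INT OUTPUT
AS
BEGIN
    INSERT INTO dbo.PackageExecutionLog (PackageName) VALUES (@PackageName);
    SET @LogID = SCOPE_IDENTITY();
END;
GO

-- Called from an Execute SQL Task when the package finishes (or fails).
CREATE PROCEDURE dbo.LogPackageEnd
    @LogID INT,
    @RowsLoaded INT,
    @Status NVARCHAR(20)
AS
BEGIN
    UPDATE dbo.PackageExecutionLog
       SET EndTime = SYSDATETIME(), RowsLoaded = @RowsLoaded, Status = @Status
     WHERE LogID = @LogID;
END;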
5. What not to do!
Overusing the Sort transformation
Excessive Lookups
Not using the Data Flow
Using multiple frameworks, or none at all
Skipping planning phases
Trying to tackle everything at once
Making decisions for the business
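On the first point: the Sort transformation is fully blocking, so when the data comes from a relational source it is usually better to sort in the source query and mark the output as pre-sorted. A sketch, assuming a hypothetical staging table dbo.StgCustomer:

-- Push the sort down to the database engine instead of using
-- the fully blocking SSIS Sort transformation.
-- dbo.StgCustomer and its columns are illustrative names only.
SELECT CustomerID, CustomerName, ModifiedDate
FROM dbo.StgCustomer
ORDER BY CustomerID;

In the source's Advanced Editor, set IsSorted = True on the output and SortKeyPosition = 1 on CustomerID so that downstream components such as Merge Join treat the rows as already ordered.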
8. What’s new in 2012
Data Quality Services component
SSIS Parameters
New Deployment model
New GUI
Shared Data Connections
Flat File Connection Manager supports ragged rows
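Tying the parameters and deployment-model bullets together: once a project is deployed to the SSISDB catalog, a package can be executed and its parameters overridden entirely from T-SQL using the built-in catalog procedures. A minimal sketch; the folder, project, package, and parameter names are hypothetical:

-- Create an execution for a package deployed to the SSIS catalog.
-- Folder/project/package/parameter names below are illustrative only.
DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
    @folder_name  = N'DataWarehouse',
    @project_name = N'DWLoad',
    @package_name = N'LoadFactSales.dtsx',
    @execution_id = @execution_id OUTPUT;

-- Override a parameter for this run
-- (object_type 20 = project parameter, 30 = package parameter).
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id,
    @object_type     = 20,
    @parameter_name  = N'BatchDate',
    @parameter_value = N'2012-05-02';

EXEC SSISDB.catalog.start_execution @execution_id;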