This document provides an overview of the mass additions process in Oracle Assets, which involves five main steps: 1) Create Mass Additions imports asset data into the FA_MASS_ADDITIONS table from external sources like legacy systems or Oracle modules, 2) Prepare Mass Additions adds additional data fields to the mass additions, 3) Post Mass Additions creates actual assets by posting the data, 4) Delete Mass Additions removes mass additions no longer needed, and 5) Purge Mass Additions completely removes deleted mass additions from the system. The document describes each step and available reports to help manage the mass additions process.
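The five steps above can be sketched as a status machine over mass-addition lines. The status names and transitions below are illustrative, loosely modeled on the queue names used with FA_MASS_ADDITIONS, and should not be read as the authoritative Oracle Assets state model.

```python
# Hypothetical sketch of the mass-additions life cycle as a status machine.
# Status names mirror common FA_MASS_ADDITIONS queue names, but are
# illustrative, not authoritative.
TRANSITIONS = {
    "NEW": {"POST", "DELETE"},   # after Prepare, a line is queued to post or delete
    "POST": {"POSTED"},          # Post Mass Additions creates the asset
    "DELETE": {"DELETED"},       # Delete Mass Additions flags the line
    "DELETED": {"PURGED"},       # Purge removes deleted lines entirely
}

def advance(status, target):
    """Move a mass-addition line to `target`, enforcing valid transitions."""
    if target not in TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot move {status} -> {target}")
    return target

line = "NEW"
line = advance(line, "POST")
line = advance(line, "POSTED")
```

A line flagged DELETE can only progress to DELETED and then PURGED, matching the separation between the Delete and Purge steps.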
This document provides information about Tableau, a data visualization software. It discusses Tableau's prerequisites, products, and architecture. Tableau allows users to easily connect to various data sources and transform data into interactive visualizations and dashboards. Key Tableau concepts covered include data sources, worksheets, dashboards, stories, filters, marks, color and size properties. The document also explains Tableau's desktop and server products, and the stages of importing data, analyzing it, and sharing results.
The document discusses developing dynamic integrations for loading metadata from multiple Hyperion Planning applications. It describes the default ODI development process which involves separate interfaces for each dimension. The motivation is to create a more flexible solution that can load any number of applications and dimensions without rewriting interfaces. Key aspects of the dynamic solution include gathering metadata from various sources into common tables, comparing to existing data to identify changes, and using dynamic options in the ODI integration to specify which application and dimension to load.
Sun Trainings is one of the best coaching centers in Hyderabad. Join our online training sessions with our real-time Informatica faculty. Practical sessions are also provided for hands-on experience. We provide training courses ideal for software and data-management professionals. Our sessions cover everything from basic to advanced level. Don't wait any longer; mail your queries to contact@suntrainings.com / (M) 9642434362.
Kishore Chaganti provides information on Tableau products, architecture, licensing, performance, and optimization. The document discusses Tableau Desktop and Server, and an architecture that includes gateways, application servers, VizQL servers, data servers, background processes, data engines, repositories, and search. It also covers licensing considerations for single-node, three-node, and five-node topologies, as well as guidelines for optimizing query performance.
SAP Cloud Platform Integration allows users to integrate business processes and data across on-premise and cloud applications in a flexible way. It provides capabilities for both process integration and data integration through its data services offering. For data integration, it allows users to efficiently move data between on-premise systems and the cloud using extract, transform, load tasks. This is done through its agent architecture which provides connectivity to on-premise sources and manages secure data transfer to cloud targets. Users can design and test data flows using its web-based user interface to meet their integration needs.
The document provides information about SAP HANA, including what it is, its architecture, and development scenarios. SAP HANA is an in-memory database that can be deployed on-premise or in the cloud. It allows for real-time analysis of large data volumes. The architecture includes components like the index server, XS runtime, and name server. Development in SAP HANA involves using calculation views to define slices of data, with Studio as a development environment. Time dimensions and graphical views can also be generated.
This document provides an overview of Informatica Designer, which is used to create mappings and transformations to move and transform data between sources and targets. It describes the key components and tools in Designer including the navigator, workspace, status bar, and output windows. It also covers how to work with sources, targets, transformations, mappings, and mapplets. Additionally, it discusses tasks like debugging mappings, viewing dependencies, and using the designer tools.
Tony von Gusmann is seeking opportunities to implement Microsoft Business Intelligence solutions. He has experience using a variety of Microsoft BI tools including SQL Server Integration Services, SQL Server Analysis Services, SQL Server Reporting Services, and Microsoft Office PerformancePoint Server. He has implemented BI solutions for clients across several industries and is available to travel as needed.
This portfolio document outlines a Business Intelligence project that involves extracting data from various sources into a SQL Server database, building an Analysis Services cube with dimensions and measures to analyze company data, creating Reporting Services reports on the data, and developing PerformancePoint dashboards and Excel Services reports to visualize key metrics. The project transfers raw data into a data warehouse, performs analysis with SSAS, generates reports with SSRS, and builds dashboards with PPS and Excel Services to provide business intelligence insights. Samples and screenshots are provided of the ETL processes, cube design, MDX queries, reports, and dashboards created in the project.
This document provides definitions and explanations of key concepts in ABAP (Advanced Business Application Programming) and SAP. It defines terms like master data, transactional data, workflow, cost objects, and G/L accounts. It also explains database tables, views, matchcodes, locking, and the data dictionary. The data dictionary manages data definitions and ensures data integrity. Views combine data from multiple tables without duplicating it physically. Matchcodes and locking help control concurrent access to data.
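The point that views combine data from multiple tables without physically duplicating it can be shown with any relational database. Below is a minimal sketch using Python's sqlite3 as a stand-in; the vendor/invoice table and column names are invented for illustration, not taken from an actual SAP data dictionary.

```python
import sqlite3

# A database view combines columns from several tables without physically
# duplicating the data -- the same idea the ABAP data dictionary applies.
# Table and column names here are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE vendor  (vendor_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE invoice (invoice_id INTEGER PRIMARY KEY, vendor_id INTEGER, amount REAL);
CREATE VIEW vendor_invoices AS
  SELECT v.name, i.invoice_id, i.amount
  FROM vendor v JOIN invoice i ON i.vendor_id = v.vendor_id;
""")
con.execute("INSERT INTO vendor VALUES (1, 'Acme')")
con.execute("INSERT INTO invoice VALUES (100, 1, 250.0)")
rows = con.execute("SELECT * FROM vendor_invoices").fetchall()
```

Updating either base table is immediately reflected in the view, since the view stores only the query, not the data.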
The document provides an overview of the skills and experience of Elmer Donavan related to business intelligence and SQL Server technologies. It includes sections summarizing his skills in SQL Server Integration Services, SQL Server Analysis Services, SQL Server Reporting Services, and Microsoft PerformancePoint. Sample projects are described to showcase work with SSIS, SSAS, SSRS and dashboards in SharePoint.
1. Images can be stored in SAP HANA using BLOB objects up to 2GB in size.
2. To use images in SAP Analytics Cloud, they must be in ST_MEMORY_LOB format in HANA.
3. A Java program is used to upload sample images saved locally into a HANA table by connecting to the database, specifying credentials, and running the code.
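The upload step described above follows a standard pattern: read the file as bytes and bind them as a BLOB parameter in a prepared statement. The sketch below uses Python's sqlite3 as a local stand-in for a HANA connection; a Java program over JDBC would follow the same shape. Table and column names are illustrative.

```python
import sqlite3

# Sketch of the image-upload step, using sqlite3 as a local stand-in for
# HANA: read the image as bytes, then bind them as a BLOB parameter.
# Table/column names are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE images (name TEXT, content BLOB)")

image_bytes = b"\x89PNG..."  # in practice: open(path, 'rb').read()
con.execute("INSERT INTO images VALUES (?, ?)", ("logo.png", image_bytes))

stored, = con.execute(
    "SELECT content FROM images WHERE name = 'logo.png'"
).fetchone()
```

Parameter binding keeps the binary data out of the SQL text, which matters once images approach the multi-gigabyte BLOB limit mentioned above.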
This portfolio showcases skills in Microsoft Business Intelligence, including SQL Server Integration Services (SSIS), Analysis Services (SSAS), and Reporting Services (SSRS). The document outlines projects involving:
1) Designing an ETL process in SSIS to load data from various sources into a SQL database.
2) Building a data warehouse cube in SSAS with dimensions, measures, and KPIs.
3) Creating SSRS reports including a sales scorecard, maps, and matrices and displaying them on a PerformancePoint dashboard in SharePoint.
The document summarizes the development of business intelligence reports for a project. It involved creating dashboards using Performance Point Server (PPS) and publishing them to SharePoint. SQL Server Reporting Services (SSRS) reports were also created and published. Excel reports were integrated into PPS dashboards. Data connections, filters, and scheduling were established to provide automated daily generation and viewing of reports.
This document provides information on various components and features of Oracle Reports 6i including: main report objects, building report queries, the live previewer tool, adding page numbers and dates to reports, different types of report columns, adding charts, runtime parameter forms, trigger categories, the PL/SQL editor, managing report templates, and creating additional report layouts.
This document provides an overview of data warehousing and ETL concepts like OLTP vs OLAP, data warehouse architecture, and Informatica PowerCenter. It defines key terms, describes why organizations implement data warehouses to help with analytics and decision making, and outlines the typical layers of a data warehouse including the ETL process. The document also provides high-level information on Informatica PowerCenter's architecture and functionality for automating ETL jobs, and discusses some common errors and Unix commands for monitoring and managing Informatica services.
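The ETL layers mentioned above can be sketched in a few lines: extract raw source rows, transform them (cast types, conform codes), and load them into a target structure. The source rows and field names below are invented for illustration; a real PowerCenter mapping expresses the same flow declaratively.

```python
# Minimal extract-transform-load sketch illustrating the layers described
# above; source rows and field names are invented.
source_rows = [
    {"order_id": "1", "amount": "19.90", "region": "east "},
    {"order_id": "2", "amount": "5.00",  "region": "WEST"},
]

def transform(row):
    # Cleanse and conform: cast types, normalize region codes.
    return {
        "order_id": int(row["order_id"]),
        "amount": float(row["amount"]),
        "region": row["region"].strip().upper(),
    }

warehouse = []  # stand-in for the target fact table
def load(rows):
    warehouse.extend(transform(r) for r in rows)

load(source_rows)
```

The transform step is where OLTP-shaped data (strings, inconsistent codes) becomes OLAP-shaped data ready for aggregation.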
This document provides an agenda and overview materials for an SAP BI end user training. The agenda covers topics such as an introduction to SAP BI, Web Intelligence reporting basics and advanced features, and downloading, printing and scheduling Webi reports. The overview materials define key BI terminology, explain the BI landscape and tools, demonstrate how to navigate the BI portal and save reports to folders, and provide more details on various BI concepts covered in the agenda. The training is aimed at teaching end users how to work with and analyze data using SAP's BI reporting tools.
William Schaffran's Business Intelligence Portfolio (wschaffr)
This document provides an overview and examples of the author's work with Microsoft's Business Intelligence Suite, including SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), SQL Server Reporting Services (SSRS), Performance Point Server 2007 (PPS), and Microsoft Office SharePoint Server (MOSS). It showcases various packages, data flows, cubes, dimensions, measures, reports, scorecards, and dashboards created by the author using these tools to analyze and report on business data.
This document is part of the Oracle BI Publisher Certification Program from Adiva Consulting Inc. Contact info@adivaconsulting.com for your corporate training needs and reduce your training cost by 75%.
SAP Business Objects XIR3.0/3.1, BI 4.0 & 4.1 Course Content
SAP Business Objects Web Intelligence and BI Launch Pad 4.0
Introducing Web Intelligence
BI launch pad: What's new in 4.0
Customizing BI launch pad
Creating Web Intelligence Documents with Queries
Restricting Data Returned by a Query
Report Design in the Java Report Panel
Enhancing the Presentation of Reports
Formatting Reports
Creating Formulas and Variables
Synchronizing Data
Analyzing Data
Drilling
Filtering data
Alerts
Input Control
Scheduling (email)
Data Refresh introduction
Sharing Web Intelligence Documents
SAP Business Objects BI Information Design Tool 4.0
Create a project
Create a connection to a relational database
Create a data foundation based on a single source relational database
Create a business layer based on a single relational data source
Publish a new universe file based on a single data source
Retrieve a universe from a repository location
Publish a universe to a local folder
Retrieve a universe from a local folder
Open a local project
Delete a local project
Convert a repository universe from a UNV to a UNX
Convert a local universe from a UNV to a UNX
Connecting to Data Sources
Create a connection shortcut
View and filter data source values in the connection editor
Create a connection to an OLAP data source
Create a BICS connection to SAP BW for client tools
Create a relational connection to SQL Server using OLEDB providers
Building the Structure of a Universe
Arrange tables in a data foundation
View table values in a data foundation
View values from multiple tables in a data foundation
Filter table values in a data foundation
Filter values from multiple tables in a data foundation
Apply a wildcard to filter table values in a data foundation
Apply a wildcard to filter values from multiple tables in a data foundation
Sort and re-order table columns in a data foundation
Edit table values in a data foundation
Create an equi-join, theta join, outer join, shortcut join
Create a self-restricting join using a column filter
Modify and remove a column filter
Detect join cardinalities in a data foundation
Manually set join cardinalities in a data foundation
Refresh the structure of a universe
Creating the Business Layer of a Universe
Create business layer folders and subfolders
Create a business layer folder and objects automatically from a table
Create a business layer subfolder and objects automatically from a table
Create dimension objects automatically from a table
Create a dimension, attribute, measure
Hide folders and objects in a business layer
Organize folders and subfolders in a business layer
View table and object dependencies
Create a custom navigation path
Create a dimensional business layer from an OLAP data source
Copy and paste folders and objects in a business layer
Filtering Data in Objects
Create a pre-defined
Oracle XML Publisher allows integration with PeopleSoft for template-based reporting. It separates data extraction from report layouts, allowing reuse of extracted data across multiple report templates. Key steps include setting up XML Publisher, creating and registering data sources, developing report templates, defining report definitions, running and viewing reports. Benefits include meeting business needs, reducing complexity and maintenance costs.
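The separation of data extraction from layout can be sketched simply: one extracted data set, several templates rendered from it. The field names and templates below are invented; in XML Publisher the data is an XML document and the layouts are RTF/PDF templates, but the reuse pattern is the same.

```python
from string import Template

# Sketch of the separation XML Publisher relies on: one extracted data
# set, many layout templates. Field names and templates are illustrative.
data = {"invoice_no": "INV-1001", "total": "250.00"}

letter  = Template("Invoice $invoice_no totals $total.")
summary = Template("$invoice_no: $total")

rendered = [letter.substitute(data), summary.substitute(data)]
```

Because the extraction runs once and each template is just a layout over the same fields, adding a new report format costs no new data work.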
Flink Forward San Francisco 2019: Build a Table-centric Apache Flink Ecosyste... — Flink Forward
The Flink Table API was initially created to address the relational query use case. It has been a good addition to the DataStream and DataSet APIs for users writing declarative queries, and it provides a unified API for batch and stream processing. We have been exploring extending the Table API beyond classical relational queries, and with this work we are establishing an ecosystem on top of it. This talk introduces the enhancements we have made to the Table API to expand its horizon; most of the work has been, or will be, contributed back to Apache Flink. We will also share our experience of building an ecosystem around the Flink Table API, and our vision for the Table API in the future.
Non-relational processing API
Relational queries are natively supported by the Table API, and it is very powerful for expressing complicated computation logic. However, a non-relational API comes in handy for general-purpose computation. We have introduced a set of non-relational methods, such as map() and flatMap(), to the Table API in a systematic manner to improve the overall user experience.
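The shape of map() and flatMap() over table rows can be shown in plain Python. The tiny Table class below is invented for illustration; Flink's actual Table API applies the same row-level operations to declarative query plans rather than in-memory lists.

```python
# Plain-Python sketch of what map() and flatMap() add beyond relational
# operators: arbitrary row-level logic. The Table class is invented for
# illustration; it is not the Flink API.
class Table:
    def __init__(self, rows):
        self.rows = list(rows)

    def map(self, fn):
        # One output row per input row.
        return Table(fn(r) for r in self.rows)

    def flat_map(self, fn):
        # Zero or more output rows per input row.
        return Table(out for r in self.rows for out in fn(r))

t = Table([{"line": "a b"}, {"line": "c"}])
words = t.flat_map(lambda r: [{"word": w} for w in r["line"].split()])
lengths = words.map(lambda r: {"word": r["word"], "len": len(r["word"])})
```

flatMap is what relational operators cannot easily express: a single input row fanning out into a variable number of output rows.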
Interactive programming
Ad-hoc queries are a pretty common use case for processing engines, especially for batch processing. To meet the requirements of such use cases, we introduced interactive programming to the Table API, which allows users to cache intermediate results. We envision that the underlying service, which caches the intermediate Flink Table, will grow significantly to provide more sophisticated capabilities.
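The core idea of caching an intermediate result can be sketched as follows. The cached() helper is invented for illustration; the point is that the expensive computation runs once, and subsequent queries reuse the materialized result.

```python
# Sketch of interactive programming's caching idea: materialize an
# intermediate result once so later queries reuse it instead of
# recomputing. The cached() API shown here is invented.
compute_count = 0

def expensive_intermediate():
    global compute_count
    compute_count += 1          # track how often we actually recompute
    return [x * x for x in range(5)]

_cache = {}
def cached(key, fn):
    if key not in _cache:
        _cache[key] = fn()
    return _cache[key]

first  = cached("squares", expensive_intermediate)
second = cached("squares", expensive_intermediate)  # served from cache
```

In a real engine the cache key is the logical plan of the intermediate table, and the materialized data lives in a cache service rather than a dict.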
Iterative processing
Compared with DataSet and DataStream, one thing missing from Table is native iteration support. Instead of naively copying the native iteration API from DataSet / DataStream, we designed a new API to address the caveats that we have seen in the existing iteration support in DataStream and DataSet.
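The pattern a native iteration API expresses is "apply a step function until the result converges or a bound is hit." Below is a minimal sketch of that loop; the iterate() signature is invented for illustration, not the proposed Flink API.

```python
# Sketch of the iterate-until-convergence pattern a native Table
# iteration API would express. The iterate() signature is invented.
def iterate(state, step, max_rounds=100, tol=1e-9):
    for _ in range(max_rounds):
        nxt = step(state)
        if abs(nxt - state) < tol:   # converged: stop early
            return nxt
        state = nxt
    return state

# Example: fixed point of x -> (x + 2/x) / 2, i.e. sqrt(2).
result = iterate(1.0, lambda x: (x + 2.0 / x) / 2.0)
```

Algorithms like PageRank or k-means follow the same shape, with a table of scores or centroids as the state instead of a scalar.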
ML on Table API
One important part of the Flink ecosystem is ML. We have proposed building an ML library on top of the Table API, so that algorithm engineers can also benefit from the optimizations provided by Flink, in both batch and stream jobs.
This document provides an overview and samples of a business intelligence project using SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), and SQL Server Reporting Services (SSRS). It includes descriptions of ETL packages in SSIS to load and transform data, a cube with dimensions and calculations in SSAS, and sample MDX queries and reports. The goals are to track, analyze, and report on facets of a simulated construction company.
Best Implementation Practices with BI Publisher — Mohan Dutt
The document discusses best practices for implementing Oracle Business Intelligence Publisher (BI Publisher). It provides an overview of BI Publisher and discusses tips like getting to the latest BI Publisher version, understanding delivery options, using the correct tools, knowing what BI Publisher can do in different applications, and how to troubleshoot issues. It also describes an implementation case study of converting Oracle E-Business Suite reports to BI Publisher.
SAP HANA modelling online training is offered at Glory IT Technologies. We have certified working professionals for this module who have trained many students globally. We also provide corporate training and job/project support services for SAP HANA modelling, and deliver the best online training services for this module.
An introduction to SQL Server in-memory OLTP EngineKrishnakumar S
This is an introduction to the Microsoft SQL Server in-memory engine, earlier code-named Hekaton. It describes the basic concepts and technologies involved in the in-memory engine. This was presented at the Kerala Microsoft Users Group meeting on May 31, 2014.
The best DBAs tune SQL Server for performance at the server, instance, and database layers. This allows for both the logical and physical database designs to meet performance expectations. But it can be difficult to know which configuration options are better than others. Learn expert tips from Microsoft Certified Masters Tim Chapman and Thomas LaRock.
Tony von Gusmann is seeking opportunities to implement Microsoft Business Intelligence solutions. He has experience using a variety of Microsoft BI tools including SQL Server Integration Services, SQL Server Analysis Services, SQL Server Reporting Services, and Microsoft Office PerformancePoint Server. He has implemented BI solutions for clients across several industries and is available to travel as needed.
This portfolio document outlines a Business Intelligence project that involves extracting data from various sources into a SQL Server database, building an Analysis Services cube with dimensions and measures to analyze company data, creating Reporting Services reports on the data, and developing PerformancePoint dashboards and Excel Services reports to visualize key metrics. The project transfers raw data into a data warehouse, performs analysis with SSAS, generates reports with SSRS, and builds dashboards with PPS and Excel Services to provide business intelligence insights. Samples and screenshots are provided of the ETL processes, cube design, MDX queries, reports, and dashboards created in the project.
This document provides definitions and explanations of key concepts in ABAP (Advanced Business Application Programming) and SAP. It defines terms like master data, transactional data, workflow, cost objects, and G/L accounts. It also explains database tables, views, matchcodes, locking, and the data dictionary. The data dictionary manages data definitions and ensures data integrity. Views combine data from multiple tables without duplicating it physically. Matchcodes and locking help control concurrent access to data.
The document provides an overview of the skills and experience of Elmer Donavan related to business intelligence and SQL Server technologies. It includes sections summarizing his skills in SQL Server Integration Services, SQL Server Analysis Services, SQL Server Reporting Services, and Microsoft PerformancePoint. Sample projects are described to showcase work with SSIS, SSAS, SSRS and dashboards in SharePoint.
1. Images can be stored in SAP HANA using BLOB objects up to 2GB in size.
2. To use images in SAP Analytics Cloud, they must be in ST_MEMORY_LOB format in HANA.
3. A Java program is used to upload sample images saved locally into a HANA table by connecting to the database, specifying credentials, and running the code.
This portfolio showcases skills in Microsoft Business Intelligence, including SQL Server Integration Services (SSIS), Analysis Services (SSAS), and Reporting Services (SSRS). The document outlines projects involving:
1) Designing an ETL process in SSIS to load data from various sources into a SQL database.
2) Building a data warehouse cube in SSAS with dimensions, measures, and KPIs.
3) Creating SSRS reports including a sales scorecard, maps, and matrices and displaying them on a PerformancePoint dashboard in SharePoint.
The document summarizes the development of business intelligence reports for a project. It involved creating dashboards using Performance Point Server (PPS) and publishing them to SharePoint. SQL Server Reporting Services (SSRS) reports were also created and published. Excel reports were integrated into PPS dashboards. Data connections, filters, and scheduling were established to provide automated daily generation and viewing of reports.
This document provides information on various components and features of Oracle Reports 6i including: main report objects, building report queries, the live previewer tool, adding page numbers and dates to reports, different types of report columns, adding charts, runtime parameter forms, trigger categories, the PL/SQL editor, managing report templates, and creating additional report layouts.
This document provides an overview of data warehousing and ETL concepts like OLTP vs OLAP, data warehouse architecture, and Informatica PowerCenter. It defines key terms, describes why organizations implement data warehouses to help with analytics and decision making, and outlines the typical layers of a data warehouse including the ETL process. The document also provides high-level information on Informatica PowerCenter's architecture and functionality for automating ETL jobs, and discusses some common errors and Unix commands for monitoring and managing Informatica services.
This document provides an agenda and overview materials for an SAP BI end user training. The agenda covers topics such as an introduction to SAP BI, Web Intelligence reporting basics and advanced features, and downloading, printing and scheduling Webi reports. The overview materials define key BI terminology, explain the BI landscape and tools, demonstrate how to navigate the BI portal and save reports to folders, and provide more details on various BI concepts covered in the agenda. The training is aimed at teaching end users how to work with and analyze data using SAP's BI reporting tools.
William Schaffrans Bus Intelligence Portfoliowschaffr
This document provides an overview and examples of the author's work with Microsoft's Business Intelligence Suite, including SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), SQL Server Reporting Services (SSR), Performance Point Server 2007 (PPS), and Microsoft Office SharePoint Server (MOSS). It showcases various packages, data flows, cubes, dimensions, measures, reports, scorecards, and dashboards created by the author using these tools to analyze and report on business data.
This document is part of Oracle BI Publisher Certification Program from Adiva Consulting Inc. contact
info@adivaconsulting.com for you corporate training needs and reduce your training cost by 75%
SAP Business Objects XIR3.0/3.1, BI 4.0 & 4.1 Course Content
SAP Business Objects Web Intelligence and BI Launch Pad 4.0
Introducing Web Intelligence
BI launch pad: What's new in 4.0
Customizing BI launch pad
Creating Web Intelligence Documents with Queries
Restricting Data Returned by a Query
Report Design in the Java Report Panel
Enhancing the Presentation of Reports
Formatting Reports
Creating Formulas and Variables
Synchronizing Data
Analyzing Data
Drilling
Filtering data
Alerts
Input Control
Scheduling (email)
Data Refresh introduction
Sharing Web Intelligence Documents
SAP Business Objects BI Information Design Tool 4.0
Create a project
Create a connection to a relational database
Create a data foundation based on a single source relational database
Create a business layer based on a single relational data source
Publish a new universe file based on a single data source
Retrieve a universe from a repository location
Publish a universe to a local folder
Retrieve a universe from a local folder
Open a local project
Delete a local project
Convert a repository universe from a UNV to a UNX
Convert a local universe from a UNV to a UNX
Connecting to Data Sources
Create a connection shortcut
View and filter data source values in the connection editor
Create a connection to an OLAP data source
Create a BICS connection to SAP BW for client tools
Create a relational connection to SQL Server using OLEDB providers
Building the Structure of a Universe
Arrange tables in a data foundation
View table values in a data foundation
View values from multiple tables in a data foundation
Filter table values in a data foundation
Filter values from multiple tables in a data foundation
Apply a wildcard to filter table values in a data foundation
Apply a wildcard to filter values from multiple tables in a data foundation
Sort and re-order table columns in a data foundation
Edit table values in a data foundation
Create an equi-join, theta join, outer join, shortcut join
Create a self-restricting join using a column filter
Modify and remove a column filter
Detect join cardinalities in a data foundation
Manually set join cardinalities in a data foundation
Refresh the structure of a universe
Creating the Business Layer of a Universe
Create business layer folders and subfolders
Create a business layer folder and objects automatically from a table
Create a business layer subfolder and objects automatically from a table
Create dimension objects automatically from a table
Create a dimension, attribute , measure
Hide folders and objects in a business layer
Organize folders and subfolders in a business layer
View table and object dependencies
Create a custom navigation path
Create a dimensional business layer from an OLAP data source
Copy and paste folders and objects in a business layer
Filtering Data in Objects
Create a pre-defined
Oracle XML Publisher allows integration with PeopleSoft for template-based reporting. It separates data extraction from report layouts, allowing reuse of extracted data across multiple report templates. Key steps include setting up XML Publisher, creating and registering data sources, developing report templates, defining report definitions, running and viewing reports. Benefits include meeting business needs, reducing complexity and maintenance costs.
Flink Forward San Francisco 2019: Build a Table-centric Apache Flink Ecosyste...Flink Forward
Flink Table API was initially created to address the relational query use case. It has been a good addition to DataStream and DataSet API for users to write declarative queries. Moreover, Table API provides a unified API for batch and stream processing. We have been exploring extending the capability of Flink Table API to go beyond the classical relational query. With these work, we are establishing an ecosystem on top of the Table API. This talk will introduce the following enhancements we have made on Table API to expand its horizon. Most of the work has been or will be contributed back to Apache Flink. We will also share our experience of building an ecosystem around Flink Table API, and our vision for Table API in the future.
Non-relational processing API
Relational query is natively supported by Table API. It is also very powerful to express complicated computation logic. However, non-relational API become handy to perform a general purpose computation. We have introduced a set of non-relational methods, such as map() and flatMap(), to Table API in a systematic manner to improve the user experience in general.
Interactive programming
Ad-hoc queries are a common use case for processing engines, especially for batch processing. To meet the requirements of such use cases, we introduced interactive programming to the Table API, which allows users to cache intermediate results. We envision that the underlying service, which caches the intermediate Flink Table, will grow significantly to provide more sophisticated capabilities.
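The caching idea described above can be sketched in plain Python: once an intermediate result is materialized, later ad-hoc queries reuse it instead of rerunning the whole pipeline. This is a hedged stand-in for the concept, not the actual Flink `Table.cache()` service; all names here are invented.

```python
# Sketch of interactive programming: cache an intermediate result so that
# repeated ad-hoc queries do not recompute the upstream pipeline.

compute_calls = 0  # counts how often the expensive pipeline actually runs

def expensive_pipeline(data):
    global compute_calls
    compute_calls += 1
    return [x * 2 for x in data]

class CachedTable:
    """Stand-in for a cached intermediate table."""
    def __init__(self, source, pipeline):
        self._source = source
        self._pipeline = pipeline
        self._cache = None

    def collect(self):
        if self._cache is None:           # first access materializes the result
            self._cache = self._pipeline(self._source)
        return self._cache                # later queries hit the cache

t = CachedTable([1, 2, 3], expensive_pipeline)
q1 = t.collect()   # triggers computation
q2 = t.collect()   # served from cache
```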
Iterative processing
Compared with DataSet and DataStream, one thing missing from Table is native iteration support. Instead of naively copying the native iteration API from DataSet / DataStream, we designed a new API to address the caveats that we have seen in the existing iteration support in DataStream and DataSet.
ML on Table API
One important part of the Flink ecosystem is ML. We have proposed building an ML library on top of the Table API, so that algorithm engineers can also benefit from the optimizations provided by Flink, in both batch and stream jobs.
This document provides an overview and samples of a business intelligence project using SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), and SQL Server Reporting Services (SSRS). It includes descriptions of ETL packages in SSIS to load and transform data, a cube with dimensions and calculations in SSAS, and sample MDX queries and reports. The goals are to track, analyze, and report on facets of a simulated construction company.
Best Implementation Practices with BI Publisher (Mohan Dutt)
The document discusses best practices for implementing Oracle Business Intelligence Publisher (BI Publisher). It provides an overview of BI Publisher and discusses tips like getting to the latest BI Publisher version, understanding delivery options, using the correct tools, knowing what BI Publisher can do in different applications, and how to troubleshoot issues. It also describes an implementation case study of converting Oracle E-Business Suite reports to BI Publisher.
SAP HANA Modelling online training is offered at Glory IT Technologies. We have certified working professionals on this module who have trained many students globally. We also provide corporate training and job/project support services for SAP HANA modelling, and we deliver high-quality online training services for this module.
An introduction to SQL Server in-memory OLTP Engine (Krishnakumar S)
This is an introduction to the Microsoft SQL Server in-memory engine that was earlier code-named Hekaton. It describes the basic concepts and technologies involved in the in-memory engine. This was presented at the Kerala Microsoft Users Group meeting on May 31, 2014.
The best DBAs tune SQL Server for performance at the server, instance, and database layers. This allows for both the logical and physical database designs to meet performance expectations. But it can be difficult to know which configuration options are better than others. Learn expert tips from Microsoft Certified Masters Tim Chapman and Thomas LaRock.
The document discusses SQL Server 2014's in-memory OLTP feature. It begins by explaining the need for an in-memory architecture due to hardware trends. It then covers how the in-memory tables store and access data via optimized structures and algorithms. Native compiled stored procedures are also discussed. The benefits are high performance for hot datasets that fit entirely in memory, while limitations include unsupported data types and inability to partially store tables.
The document discusses SQL Server 2012's new indirect checkpoint algorithm. The current checkpoint algorithm flushes all dirty buffers during a checkpoint, causing unpredictable recovery times and IO spikes. The indirect checkpoint algorithm calculates a new minimum recovery LSN during checkpoints rather than flushing buffers. It uses a background recovery writer thread to flush pages when the dirty page count exceeds a threshold, maintaining predictable recovery times without checkpoint IO spikes. The presenter advocates testing and monitoring the new algorithm's performance benefits when enabling it on databases requiring guaranteed recovery time objectives.
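The threshold-driven behavior described above can be simulated in a few lines. This is a hedged toy model of the idea, not SQL Server's actual algorithm: a background writer flushes a batch of dirty pages only when the dirty-page count exceeds a threshold, producing many small, even flushes instead of one large checkpoint spike. The threshold, batch size, and workload are invented for illustration.

```python
# Toy simulation of an indirect-checkpoint-style background writer:
# flush small batches when the dirty-page count crosses a threshold,
# rather than flushing everything at checkpoint time.

DIRTY_THRESHOLD = 100

dirty_pages = set()
flush_batches = []   # size of each flush, to show the smoothing effect

def background_recovery_writer():
    # flush the oldest pages only once we are over the threshold
    if len(dirty_pages) > DIRTY_THRESHOLD:
        batch = sorted(dirty_pages)[:DIRTY_THRESHOLD // 2]
        for p in batch:
            dirty_pages.discard(p)
        flush_batches.append(len(batch))

def mark_dirty(page_id):
    dirty_pages.add(page_id)
    background_recovery_writer()

# a workload that dirties 500 distinct pages
for page in range(500):
    mark_dirty(page)
```

Instead of one 500-page flush, the writer performs a series of small, equal-sized flushes, which is the smoothing behavior the indirect checkpoint aims for.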
Hekaton is Microsoft SQL Server's in-memory OLTP engine. It allows for creating memory-optimized tables to fully leverage RAM and provide faster performance than disk-based tables. Memory-optimized tables use new row formats and indexing structures like hash and range indexes that are optimized for memory. Stored procedures can be natively compiled for maximum speed when operating on memory-optimized tables. There are some limitations around data types and features supported. Diagnostic objects like DMVs provide visibility into Hekaton's memory usage and performance.
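The benefit of a hash index for point lookups, as mentioned above, can be shown with a pure-Python model. This is a hedged conceptual sketch, not Hekaton's actual bucket structure: a hashed key probes a single bucket instead of scanning every row. Table contents and bucket count are invented.

```python
# Conceptual model of a hash index on an in-memory table: a point lookup
# probes one hash bucket, while the alternative scans every row.

rows = [{"id": i, "name": f"cust{i}"} for i in range(10_000)]

# build a hash index on "id": bucket number -> list of row references
NBUCKETS = 1024
buckets = [[] for _ in range(NBUCKETS)]
for row in rows:
    buckets[hash(row["id"]) % NBUCKETS].append(row)

def lookup_via_index(key):
    # probe only the one bucket the key hashes to
    for row in buckets[hash(key) % NBUCKETS]:
        if row["id"] == key:
            return row
    return None

def lookup_via_scan(key):
    # touch every row in the table
    for row in rows:
        if row["id"] == key:
            return row
    return None
```

Both functions return the same row, but the index probe inspects roughly 10 rows per lookup here versus up to 10,000 for the scan.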
With the release of SQL Server 2012, the landscape of options for providing high availability and/or disaster recovery changed.
In this presentation, Warwick looks at some existing technologies you may already be using in your environment to meet your high availability or disaster recovery requirements. He then introduces AlwaysOn Availability Groups, examining the technologies that make up this new feature before looking at how you can manage your environment.
Warwick will round out the presentation with a demo on how you can configure / build an AlwaysOn environment
This document discusses database security. It begins by stating that as threats to databases have increased, security of databases is increasingly important. It then defines database security as protecting the confidentiality, integrity, and availability of database data. The document outlines some common database security threats like SQL injection, unauthorized access, password cracking, and network eavesdropping. It then discusses some methods of securing databases, including through firewalls and data encryption. Firewalls work by filtering database traffic according to rules, while data encryption scrambles data so it can only be read by authorized users. The document stresses the importance of restricting database access to authorized users and applications.
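The firewall countermeasure described above boils down to filtering database traffic against explicit rules. The following is a minimal hedged sketch of that idea in Python; the rules, subnets, and port numbers are invented for illustration and do not model any particular firewall product.

```python
# Sketch of firewall-style filtering of database traffic: only connections
# matching an allowlist of (client subnet, port) rules are accepted.
import ipaddress

ALLOW_RULES = [
    ("10.0.0.0/24", 1433),   # application subnet may reach the SQL Server port
    ("10.0.1.5/32", 1433),   # a single admin workstation
]

def is_allowed(client_ip, port):
    addr = ipaddress.ip_address(client_ip)
    for cidr, allowed_port in ALLOW_RULES:
        if addr in ipaddress.ip_network(cidr) and port == allowed_port:
            return True
    return False   # default deny: anything not explicitly allowed is blocked
```

The default-deny posture at the end mirrors the document's point about restricting database access to authorized users and applications.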
There’s just so much to do to get your systems to run in an optimal fashion, but where do you start. This session will walk you through an extensive checklist of things you can do to better manage your servers, databases and code. We’ll start with server configurations and what you can do with them. We’ll then move through standard administrative tasks that will help with performance. Then it’s off to the intricacies of database design and how that can affect performance. We’ll finish up with T-SQL and where it can hurt or help your systems. Get your own systems to run faster with the information you’ll receive.
Antonios Chatzipavlis is a database architect and SQL Server expert with over 30 years of experience working with SQL Server. The document provides tips for installing and configuring SQL Server correctly, including selecting the appropriate server hardware, installing Windows, configuring disks and storage, installing and configuring SQL Server, and creating user databases. The goal is to optimize performance and reliability based on best practices.
The document discusses SQL Server monitoring and troubleshooting. It provides an overview of SQL Server monitoring, including why it is important and common monitoring tools. It also describes the SQL Server threading model, including threads, schedulers, states, the waiter list, and runnable queue. Methods for using wait statistics like the DMVs sys.dm_os_waiting_tasks and sys.dm_os_wait_stats are presented. Extended Events are introduced as an alternative to SQL Trace. The importance of establishing a performance baseline is also noted.
Choosing the Right Business Intelligence Tools for Your Data and Architectura... (Victor Holman)
This document discusses various business intelligence tools for data analysis including ETL, OLAP, reporting, and metadata tools. It provides evaluation criteria for selecting tools, such as considering budget, requirements, and technical skills. Popular tools are identified for each category, including Informatica, Cognos, and Oracle Warehouse Builder. Implementation requires determining sources, data volume, and transformations for ETL as well as performance needs and customization for OLAP and reporting.
The document provides an overview of key concepts in data warehousing and business intelligence, including:
1) It defines data warehousing concepts such as the characteristics of a data warehouse (subject-oriented, integrated, time-variant, non-volatile), grain/granularity, and the differences between OLTP and data warehouse systems.
2) It discusses the evolution of business intelligence and key components of a data warehouse such as the source systems, staging area, presentation area, and access tools.
3) It covers dimensional modeling concepts like star schemas, snowflake schemas, and slowly and rapidly changing dimensions.
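The slowly changing dimension concept from the outline above can be sketched briefly. This is a hedged illustration of a Type 2 SCD, one common approach: instead of overwriting a changed attribute, the current row is expired and a new version is inserted, preserving history. Table layout and column names are invented for illustration.

```python
# Sketch of a Type 2 slowly changing dimension: expire the current row
# and insert a new version when a tracked attribute changes.
import datetime

dim_customer = [
    {"customer_id": 1, "city": "Boston", "current": True,
     "valid_from": datetime.date(2020, 1, 1), "valid_to": None},
]

def apply_scd2(dim, customer_id, new_city, change_date):
    for row in dim:
        if row["customer_id"] == customer_id and row["current"]:
            if row["city"] == new_city:
                return                      # no change, nothing to do
            row["current"] = False          # expire the old version
            row["valid_to"] = change_date
    dim.append({"customer_id": customer_id, "city": new_city,
                "current": True, "valid_from": change_date, "valid_to": None})

# the customer moves: history is kept, not overwritten
apply_scd2(dim_customer, 1, "Austin", datetime.date(2023, 6, 1))
```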
The document discusses database security and provides an overview of key concepts. It defines database security and the data security lifecycle. It also outlines various countermeasures for database security including authorization, views, backup and recovery, integrity, encryption, and RAID technology. The overall goals are to understand security issues in database systems and consider how to address threats and protect against risks like theft, fraud, and data loss or exposure.
SQL Server 2012 High Availability with AlwaysOn Availability Groups (Edwin M Sarmiento)
This document discusses high availability options in SQL Server, including database mirroring, replication, and log shipping. It notes challenges with database mirroring, including inefficient resource utilization, multiple copies of data, and lack of automatic failover. It then introduces AlwaysOn Availability Groups, a new high availability feature in SQL Server 2012 that uses failover clustering and allows for multiple synchronized secondary replicas with automatic failover. A demo of AlwaysOn Availability Groups is provided.
If you really want to understand what database security is all about, this presentation is for you.
You will understand it just by having one look at the slides.
The presentation contains material that is simple to understand.
SQL Server Performance Tuning and Optimization (Manish Rawat)
SQL Server Concepts/Structure
Performance Measuring & Troubleshooting Tools
Locking
Performance Problem : CPU
Performance Problem : Memory
Performance Problem : I/O
Performance Problem : Blocking
Query Tuning
Indexing
This document provides information about an upcoming presentation on Columnstore Indexes in SQL Server 2014. It notes that the presentation will be recorded so that those who could not attend live can view it later. It requests that anyone with issues about being recorded should leave immediately, and remaining will be taken as consent to the recording. It also states the presentation will be free and will begin in 1 minute.
The document discusses database security. It covers main aspects of database security including integrity, confidentiality, and availability. It also discusses access control methods like discretionary access control and mandatory access control. The document lists various threats to database security from hardware, software, networks, users, and programmers/operators. These threats include things like fires, unauthorized data access, data theft, and inadequate security policies.
Example of the BI application technology comparison based on customer needs and application capabilities performed by DWApplications.
This is one of 3 deliverables in the free BI Roadmap Assessment provided by DWApplications.
- BI application technology comparison
- Current and future state assessment
- Timeline, resource and implementation plan
If you are interested in a free BI roadmap assessment
Contact: scott.mitchell@dwapplications.com
This document outlines the author's experience with business intelligence tools including data modeling, T-SQL, SQL Server Integration Services, SQL Server Analysis Services, MDX programming, SQL Server Reporting Services, Performance Point Server, and SharePoint Server. Specific examples provided include designing an OLAP data warehouse schema, developing ETL processes in SSIS, building and deploying an SSAS cube, writing MDX queries, creating parameterized reports in SSRS, developing reports in Performance Point Server published to SharePoint, and integrating various reporting solutions using SharePoint. The author has over 20 years of IT experience including requirements gathering, database and application design, development, testing, documentation, and support.
This document outlines the design of a business intelligence portfolio for All Works Construction Company including a data warehouse, ETL processes, OLAP cube, and reports. The data warehouse will use a snowflake schema design. The ETL processes will load dimension and fact data into a staging area. An OLAP cube will be created using SQL Server Analysis Services with a MOLAP storage mode and partitioning strategy. Reports will be created in SQL Server Reporting Services, Excel Services, and Performance Point and published to SharePoint.
We compare the traditional ETL approach to the newer business-rules-driven E-LT paradigm, and answer whether conventional ETL tools should be considered obsolete and phased out of the enterprise architecture, with tools based on business rules and E-LT taking their place.
This document summarizes Hong-Bing Li's portfolio of business intelligence projects using Microsoft BI tools. It includes 3 SQL Server Reporting Services reports, 8 dashboards in SharePoint including scorecards and KPIs, 18 examples of SQL programming, and 25 SQL Server Integration Services packages for data integration. The document provides detailed descriptions and screenshots of sample reports, dashboards, SQL code, and SSIS packages developed by the author.
SSIS provides capabilities for ETL operations using a control flow and data flow engine. It allows importing and exporting data, integrating heterogeneous data sources, and supporting BI solutions. Key concepts include packages, control flow, data flow, variables, and event handlers. SSIS can be optimized for scalability through techniques like parallelism, avoiding blocking transformations, and leveraging SQL for aggregations. Performance can be monitored using tools like SQL Server logs, WMI, and MOM. SSIS is interoperable with data sources like Oracle, Excel, and flat files.
This portfolio contains examples of the author's work with Microsoft's SQL Server 2008 Business Intelligence stack. It includes projects with Transact SQL, SQL Server Integration Services, SQL Server Analysis Services, SQL Server Reporting Services, and PerformancePoint Server. The projects were completed as part of a 12-week hands-on Master's program and involved building databases, an OLAP cube, reports and dashboards using real-world business scenarios and data from various sources like Excel, XML and CSV files.
This portfolio contains examples of the author's work with Microsoft Business Intelligence tools. It includes projects and queries demonstrating skills in SQL Server, SSIS, SSAS, SSRS, Excel, PerformancePoint Services and SharePoint. It also describes the author's education through SetFocus, a hands-on BI training program focused on the Microsoft stack.
BI Architecture and Conceptual Framework (Slava Kokaev)
This document discusses business intelligence architecture and concepts. It covers topics like analysis services, SQL Server, data mining, integration services, and enterprise BI strategy and vision. It provides overviews of Microsoft's BI platform, conceptual frameworks, dimensional modeling, ETL processes, and data visualization systems. The goal is to improve organizational processes by providing critical business information to employees.
The document introduces concepts related to business intelligence (BI) and data warehousing (DW). It defines BI and DW, discusses their purposes, and describes common processes like dimensional modeling, extract-transform-load (ETL), online analytical processing (OLAP), and tools from IBM Cognos and Microsoft SQL Server used for BI and DW projects.
The document discusses various concepts related to database design and data warehousing. It describes how DBMS minimize problems like data redundancy, isolation, and inconsistency through techniques like normalization, indexing, and using data dictionaries. It then discusses data warehousing concepts like the need for data warehouses, their key characteristics of being subject-oriented, integrated, and time-variant. Common data warehouse architectures and components like the ETL process, OLAP, and decision support systems are also summarized.
The document summarizes Arthur Chan's business intelligence portfolio from his master's program. It includes samples and descriptions of projects involving extracting, transforming, and loading data with SQL Server Integration Services, modeling data with SQL Server Analysis Services, creating reports with SQL Server Reporting Services, and developing dashboards and scorecards with PerformancePoint and SharePoint. The portfolio aims to demonstrate Arthur's skills in core business intelligence technologies like SSIS, SSAS, SSRS, and Microsoft Office products for performance management and business analytics.
When it comes to dealing with large, complex, and disparate data sets, traditional database technologies are unable to keep pace with the rich analytics necessary to power today’s data-driven applications. Graph analytics databases are becoming the underlying infrastructure for AI and machine learning. These databases allow users to ask complex questions across complex data, which is not always practical or even possible at scale using other approaches. They also enable faster insights against massive data sets when combined with pattern recognition, statistical analysis, and AI/ machine learning. And in the case of standards-based graph databases, they connect with popular visualization tools like Graphileon, allowing users to easily explore their data stores and quickly build compelling graph-based applications.
Gowthami S is a software developer and designer with over 2 years of experience in data warehousing using databases like Teradata and Oracle. She has extensive experience with ETL tools like Informatica and data loading utilities for Teradata, and has worked on full data warehouse development lifecycles including requirements, design, implementation, and maintenance. She currently works as a software engineer at Tech Mahindra, where her projects include developing ETL processes and Teradata SQL queries to load and transform data from various sources into a Cisco enterprise data warehouse supporting business intelligence reporting and analytics.
SQL Server 2008 Portfolio for Saumya Bhatnagar (sammykb)
The document is Saumya Bhatnagar's portfolio showcasing her skills with SQL Server 2008. It includes examples from projects during her intensive 13-week SQL Server 2008 Master's program. The portfolio covers T-SQL development, database administration tasks, and tools like SSIS, SSRS. Specific projects highlighted include developing databases, stored procedures, triggers for a fictional bank application and using advanced T-SQL, SSIS, and SSRS to analyze real-world data.
The document discusses SQL query analyzer tools and database maintenance. It covers SQL query analyzer, execution plans, column statistics, running the analyzer, query tuning, optimization, and other analyzer tools like the profiler and tuning advisor. It also discusses database maintenance tasks like managing transaction log files, eliminating index fragmentation, ensuring accurate statistics, and establishing an effective backup strategy. The document demonstrates some of these tools and tasks.
5. ETL – Master Control Flow: Master control flow populating a small staging database (example using SSIS). It consists of an ETL container calling a set of related external packages, utilizing the Execute Package functionality in SSIS, followed by a series of administrative tasks: backup, database shrink, and index rebuild. Each administrative task has a failure e-mail notification, along with a final Success task for the entire package.
6. ETL – Package Container: Zoom into the package container. The process for populating the staging database demonstrates the dependencies between the different dimensional hierarchies. For example, the timesheet load requires both the employee and project loads as prerequisites before it can execute.
7. ETL – Employee Data Flow: Sample data flow for employees. The employee data sources from a CSV flat file and checks for a truncation issue on a new derived column, the full name (consisting of first name appended to last name). In the event of truncation, the process writes these cases to a warning file before reintegrating them into the main flow.
8. ETL – Employee Data Flow Employee Data Flow (continued) Following the truncation check, the data flow looks up the existing employee table to determine if a given input will be an insert or an update (or an error).
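The lookup step described above can be sketched outside SSIS. This is a hedged plain-Python illustration of the classification logic, not the SSIS Lookup transform itself: each incoming row is compared to the existing employee table to decide insert versus update, with rows lacking a business key routed to an error output. Keys and columns are invented for illustration.

```python
# Sketch of the lookup-based routing in the employee data flow:
# classify each incoming row as an insert, an update, or an error.

existing = {101: {"emp_id": 101, "name": "Ann Lee", "rate": 40.0}}

def classify(incoming):
    inserts, updates, errors = [], [], []
    for row in incoming:
        if row.get("emp_id") is None:
            errors.append(row)            # no business key -> error output
        elif row["emp_id"] not in existing:
            inserts.append(row)           # unseen key -> insert
        elif row != existing[row["emp_id"]]:
            updates.append(row)           # key exists, attributes changed
        # rows identical to the existing record are ignored
    return inserts, updates, errors

ins, upd, err = classify([
    {"emp_id": 101, "name": "Ann Lee", "rate": 42.0},  # changed rate -> update
    {"emp_id": 102, "name": "Bo Chen", "rate": 38.0},  # new id       -> insert
    {"emp_id": None, "name": "???", "rate": 0.0},      # bad key      -> error
])
```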
9. ETL – Timesheets Control Flow: Another control flow in the same package. This one uses a loop container to read multiple timesheet inputs from the same directory before sending either a success or failure e-mail notification.
10. ETL – Slowly Changing Dimensions: CDC for a Slowly Changing Dimension. This example, from a different package, implements Change Data Capture and utilizes a Slowly Changing Dimension transform to sort out inserts and updates. Note that the source table for this data flow is the CDC table.
12. ETL – Oracle PL/SQL: Clickstream Data Warehouse. Source is a Web traffic transactional system. The databases are not linked, so ETL is a combination of Oracle Import/Export (into a staging database) and PL/SQL packages and stored procedures. This procedure calculates several daily clickstream statistics; the cursor illustrates several of the calculations:

procedure p_clicks_daily_sum ( p_day_ID in number ) is
  [...]
  cursor c_pages_per_hour is
    select min(count(click_ID))       min_cnt_clicks
         , max(count(click_ID))       max_cnt_clicks
         , min(count(distinct URL))   min_cnt_unique_page
         , max(count(distinct URL))   max_cnt_unique_page
         , sum(count(distinct URL)) / cn_hours_per_day mean_uq_page_viewed_cnt
      from com_daily_ssn_details cdsd
     where com_ID = v_com_ID
       and click_day_ID = p_day_ID
     group by click_hour;
13. ETL – Oracle PL/SQL: Daily Clicks Summary (continued). The process employs a cursor loop to load the daily clickstream facts by community ID (the top-level dimension) and time:

begin
  v_cal_date := mat_stage.get_calendar_date(p_day_ID);
  for r_com in c_communities loop
    [...]
    begin
      insert into clicks_daily_sum values
        ( p_day_ID
        , r_com.com_ID
        , f_get_partition_key(v_cal_date)
        , v_min_cnt_clicks
        , v_max_cnt_clicks
        , v_min_cnt_unique_page
        , v_max_cnt_unique_page
        , v_mean_uq_page_viewed_cnt
        , [more columns...]
        , sysdate );
    exception
      when others then
        DWH.DWH_process.write_errors(parameters);
    end;
  end loop;
end p_clicks_daily_sum;
15. Analysis Services: The staging database that serves as the data source for the OLAP cube, a result of the preceding SSIS example. The model has five dimensional hierarchies and four fact tables.
16. Analysis Services OLAP Cube Design This screenshot illustrates the relationships between the five dimensional hierarchies and four fact tables
17. Analysis Services – Calculations and KPIs: The calculated member and KPI design tabs. Both images highlight the detail behind the KPI for Overhead as a Percentage of Total Cost.
18. Analysis Services – KPI The end result, using Excel as the reporting client, demonstrating Overhead as a Percentage of Total Cost by Jobs
20. Report Design: Overhead Category Report (example using SSRS). The two extra datasets calculate the previous quarter and set the default to the most recent quarter. Note: as this is overhead, negative values for percentage change are good (black) and positive values are bad (red).
21. Simple Tabular Report Overhead Category Report (continued) Basic report showing current and previous quarter’s overhead and the percentage change, accepting the current quarter as an input parameter. Also shows the use of a SharePoint site collection as the distribution medium.
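The previous-quarter dataset logic described above reduces to a small calculation. This is a hedged sketch of the arithmetic, not the actual SSRS dataset query: derive the previous quarter from a (year, quarter) parameter, wrapping Q1 back to the prior year's Q4.

```python
# Sketch of the "previous quarter" calculation behind the report's
# extra dataset: Q1 wraps back to Q4 of the prior year.

def previous_quarter(year, quarter):
    if quarter == 1:
        return (year - 1, 4)
    return (year, quarter - 1)
```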
22. Dashboard Design: PerformancePoint Server Dashboard Designer. Scorecard with two KPIs, including the same Overhead Percentage as earlier. The dashboard design shows a single filter that links to the scorecard on the left. The Financials scorecard is in the right zone and does not use the filter.
24. Analytic Dashboard: Employee Labor Analysis. Second example showing both an analytic chart and a grid on the same dashboard page. Note the dual Y-axes that the chart utilizes.
25. SharePoint Web Part: Job Profitability. Another dual-axis example, this one using Excel Services to display the chart in SharePoint, with an Analysis Services Web Part filter.
27. Oracle Application Server – Web Content Management: Mock-up of the primary page, which lists all items for each content area of the website:

procedure magazine_list ( [parameter list] ) is
begin
  -- Piece the dynamic query together.
  v_query := v_select || v_where || v_ord_grp;
  open c_pages for v_query;
  fetch c_pages into v_resultset(v_resultset.count + 1);
  while not c_pages%NOTFOUND loop
    fetch c_pages into v_resultset(v_resultset.count + 1);
  end loop;
  v_row_count := DBMS_SQL.last_row_count;
  close c_pages;
28. Oracle Application Server – Web Content Management (continued):

  -- Build the html frame. (does not include the navigation frame)
  htp.p('
    <html>
    <head>
    <title>Untitled Document</title>
    <meta http-equiv="Content-Type" content="text/html">
    <script language="JavaScript">
    [...]
  ');
  for i in v_first_result .. least(v_last_result, v_resultset.count) loop
    htp.p('<tr>
      <td>' || v_title || '</td>
      <td class="list_item">' || v_resultset(i).ID || '</td>
      <td class="list_item">' || to_char(v_resultset(i).start_date, 'fmmm/dd/yyyy') || '</td>
      <td class="list_item">' || to_char(v_resultset(i).end_date, 'fmmm/dd/yyyy') || '</td>
      <td class="list_item">' || v_edit_pvw_del || '</td>
    </tr>');
  end loop;
end magazine_list;
29. Oracle Application Server – Web Content Management, Edit Content: The actual content management utility employs JavaScript to load content text and related data into static HTML subpages:

procedure nav_load_and_save_content_mag ( p_graphics_path in varchar2 := cn_graphics_path ) is
begin
  htp.p('
  function load_content() {
    isSelected(''content'');
    final_form = document.final_page;
    contentform = top.contentFrame.document.content;
    for (i = 0; i < author_list.length; i++) {
      contentform.author.options[contentform.author.options.length] = author_list[i];
    }
    preload_select(contentform.author, final_form.P_PGE_CMEM_ID.value);
    contentform.leadin.value = final_form.P_PGE_LEAD_IN.value;
    contentform.body.value = final_form.P_PGE_BODY.value;
  }
30. Oracle Application Server – Web Content Management, Edit Content (continued):

  function save_content(direction) {
    final_form = document.final_page;
    contentform = top.contentFrame.document.content;
    if (get_selected(contentform.author) != "")
      document.content_change.src = ''' || p_graphics_path || 'stus_yes.gif'';
    else
      document.content_change.src = ''' || p_graphics_path || 'stus_no.gif'';
    final_form.P_PGE_CMEM_ID.value = get_selected(contentform.author);
    final_form.P_PGE_LEAD_IN.value = contentform.leadin.value;
    rawText = contentform.body.value;
    encodedText = "";
    if (contentform.leadin.value.length > 1990) {
      alert("The lead-in text area cannot handle more than 2000 characters. Please reduce its size.");
      return false;
    }
    final_form.P_PGE_BODY.value = encodedText;
    save_redirect(direction);
  }');
end;
31. Perl – CGI Report Delivery for Web Streaming Advertising: Note the $dbh handle, indicating the use of the DBI module to handle database-agnostic SQL functionality:

sub a1 {
  print "<table border='1' bordercolor='#FFFFFF' cellspacing='0' cellpadding='2'>";
  [header items]
  print "</table>";
  [...]
  my $sth3 = $dbh->prepare('
    SELECT date_id, SUM(registrations), SUM(ath), SUM(listeners),
           SUM(sessions), SUM(ads_served), SUM(ads_missed)
      FROM a1
     WHERE MONTH(date_ID) = ? AND YEAR(date_ID) = ?
     GROUP BY date_ID
     ORDER BY date_ID DESC')
    or die "Couldn't prepare statement: " . $dbh->errstr;