This document provides an overview of key concepts in SAP Business Intelligence (BI), including info objects, info cubes, and star schemas. It discusses how info objects define characteristics and key figures, and how master data and transactional data relate to these. It also explains the differences between classical and extended star schemas used in SAP BI info cubes, noting that extended star schemas allow for more dimensions and faster performance. Steps are provided for creating various BI objects like info objects, attributes, and info cubes based on the extended star schema model.
This document outlines a 20-day Tableau training course covering Tableau basics, advanced functions, administration, and Tableau Public. The training includes connecting to various data sources, building visualizations, dashboarding, calculations, sets, filters, advanced charts, maps, and performance optimization. Administration topics include server configuration, permissions, subscriptions, and data refresh. Tableau Public is also introduced for end user sharing of reports.
This document outlines topics for working with data in an Access database, including searching for and replacing text, entering data accurately using AutoCorrect, editing text, and arranging columns. Specific techniques are described such as finding and refining searches, enabling AutoCorrect, selecting, deleting and inserting text, and checking spelling options. The document provides an outline for a training course on working with data in an Access database.
This document provides an introduction to Microsoft Access 2007, including:
1) Databases are used to organize related information into tables, queries, forms, and reports. Tables store the core data, while queries find and retrieve data, forms provide interfaces to view and edit data, and reports analyze and present data.
2) Proper database design includes determining the database purpose and intended uses, defining relevant tables and their fields, identifying primary keys to connect tables, and determining relationships between tables.
3) The core components of an Access database are tables, which organize data into rows and columns. Fields define the columns and have properties like data type and size that determine how data is stored and displayed.
Data Mining Extensions (DMX) is a query language used to create, manage, and query data mining models. DMX was introduced in 1999 to define common concepts for data mining. It includes objects like mining structures and models. Mining structures define columns and hold cached data, while models perform machine learning on structures. DMX statements are used for creation, prediction, and training. Prediction joins apply model patterns to data to estimate unknown values.
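DMX itself runs inside SQL Server Analysis Services, but the idea behind a prediction join — matching new rows against a trained model's patterns to estimate unknown values — can be sketched language-neutrally. The following Python snippet is an illustrative analogy only (the "model" is just per-group means), not DMX syntax:

```python
# Toy "model": mean purchase amount per customer group, learned from training rows.
training = [("young", 20), ("young", 30), ("senior", 50)]
groups = {}
for group, amount in training:
    groups.setdefault(group, []).append(amount)
patterns = {g: sum(v) / len(v) for g, v in groups.items()}

# "Prediction join": new rows with an unknown amount are joined against the
# model's patterns to fill in the missing value.
new_rows = [{"group": "young"}, {"group": "senior"}]
for row in new_rows:
    row["predicted_amount"] = patterns[row["group"]]

print(new_rows)  # the 'young' row gets 25.0, the 'senior' row gets 50.0
```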
Access is a relational database management system that stores data in tables and allows for complex querying of data across related tables. It stores data in tables rather than worksheets like Excel. Access allows users to create forms and reports, run queries, and connect to external data sources. Key features include building queries visually through a graphical query designer interface without needing SQL knowledge, setting relationships between tables, and updating records through queries.
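The cross-table querying described above can be sketched with Python's built-in `sqlite3` module; the graphical Access query designer generates essentially this kind of JOIN behind the scenes. Table and column names here are illustrative:

```python
import sqlite3

# Two related tables: Customers (one side) and Orders (many side).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY,
                     CustomerID INTEGER REFERENCES Customers(CustomerID),
                     Total REAL);
INSERT INTO Customers VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO Orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 15.0);
""")

# A JOIN pulls related rows from both tables and aggregates per customer.
rows = con.execute("""
    SELECT c.Name, SUM(o.Total)
    FROM Customers c JOIN Orders o ON o.CustomerID = c.CustomerID
    GROUP BY c.Name ORDER BY c.Name
""").fetchall()
print(rows)  # [('Alice', 65.0), ('Bob', 15.0)]
```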
This document provides an overview of creating and working with tables in Microsoft Access. It describes how to design a table by adding fields and setting their properties. Key points covered include data types, primary keys, field properties like format, validation and required fields. Navigation and editing features for working with table data are also summarized.
This PowerPoint presentation covers the basics of Microsoft Access 2010, including how to identify good database design, create tables and define fields, change table structures, add queries, forms, and reports, and save and close databases. It also discusses how to create databases using templates, organize objects in the navigation pane, add new tables to template databases, and print reports and tables. The overall objectives are to understand fundamental Access concepts and tasks.
This document provides revision materials for an exam on database basics. It includes sections on database fundamentals, normalization, data validation, naming conventions, example questions, exam tips, and exam technique. The document covers key database concepts like entities, attributes, relationships, normalization forms, field data types, and validation rules. It also provides examples of database objects like tables, queries, forms, and reports. Overall, the document offers a comprehensive review of common database topics that may appear on the exam.
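As a quick refresher on the normalization idea such revision materials cover, a minimal Python sketch (example data is invented): a denormalized table repeats the lecturer's office for every course, a transitive dependency that third normal form removes by splitting the data into two relations.

```python
# Denormalized rows: (course, lecturer, office) — the office is repeated
# for every course a lecturer teaches.
flat = [
    ("CS101", "Smith", "Room 12"),
    ("CS102", "Smith", "Room 12"),
    ("CS201", "Jones", "Room 7"),
]

# Normalized: Course -> Lecturer in one relation, Lecturer -> Office in another.
courses = {course: lecturer for course, lecturer, _ in flat}
offices = {lecturer: office for _, lecturer, office in flat}

# Each fact is now stored once, so an office move is a single update
# instead of one per course row.
offices["Smith"] = "Room 20"
print(courses["CS102"], offices[courses["CS102"]])  # Smith Room 20
```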
This document provides an overview and introduction to Microsoft Access 2007. It discusses what a database is and how Access allows users to create computerized databases. It describes the basic Access interface elements like the navigation pane, ribbon, and views. It also introduces some common Access objects like tables, queries, forms, reports, macros and modules. The second half of the document focuses on creating and working with tables, including adding fields, assigning data types, setting field properties, and creating lookup columns to relate tables.
This document provides an overview of Microsoft Access and database concepts. It includes sections on getting started with Access, navigating the environment, database terms like tables, queries, forms and reports, and how to create and manage a database including adding tables, fields, records, relationships and running queries. The document aims to introduce users to key Access features and the basics of setting up and working with an Access database.
The document provides instructions for a database project involving creating tables, forms, queries, and reports in Microsoft Access. Students are asked to create tables to store supplier and product data, with a one-to-many relationship between them. Forms and queries are then developed to enter and extract data from these tables. Finally, a report is generated to outline products and suppliers sorted by state. The tasks guide students through the process of designing a basic relational database in Access.
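The one-to-many relationship at the heart of that project can be sketched in SQL via Python's `sqlite3`; the supplier/product names are illustrative. The key point is that every row on the "many" side must reference an existing row on the "one" side:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
con.executescript("""
CREATE TABLE Suppliers (SupplierID INTEGER PRIMARY KEY, Name TEXT, State TEXT);
CREATE TABLE Products  (ProductID INTEGER PRIMARY KEY, Name TEXT,
                        SupplierID INTEGER NOT NULL
                          REFERENCES Suppliers(SupplierID));
INSERT INTO Suppliers VALUES (1, 'Acme', 'OH');
INSERT INTO Products  VALUES (100, 'Widget', 1), (101, 'Gadget', 1);
""")

# A product pointing at a nonexistent supplier violates the relationship.
try:
    con.execute("INSERT INTO Products VALUES (102, 'Orphan', 99)")
    orphan_accepted = True
except sqlite3.IntegrityError:
    orphan_accepted = False
print(orphan_accepted)  # False: the orphan row is rejected
```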
MS Access is a relational database management system used to create and manage databases. It allows users to define, create, store, manage and manipulate data in a structured manner using tables, queries, forms, reports, macros and modules. Some key business uses of MS Access include compiling business information into databases, building relationships between different data tables, creating queries to extract specific data, and generating reports. Access provides tools to design user-friendly interfaces for entering, viewing and managing business data.
This document outlines the objectives and steps to create and manage a Microsoft Access 2007 database, including:
1) Creating a database file and designing tables, forms, queries, and reports to enter and display data
2) Populating tables with data and formatting fields
3) Designing forms and queries to view, enter, and extract specific data
4) Creating reports to output selected data
5) Properly closing and exiting the Access program and database
The document defines conceptual, logical, and physical data models and compares their key features. A conceptual model shows entities and relationships without attributes or keys. A logical model adds attributes, primary keys, and foreign keys. A physical model specifies tables, columns, data types, and other implementation details.
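Of the three levels, only the physical model is executable. A hedged sketch of what the physical level adds — concrete tables, column types, and key constraints — using SQLite DDL (entity and column names are illustrative):

```python
import sqlite3

# Physical model: entities and keys from the logical model become DDL
# with concrete column types.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Department (
    DeptID   INTEGER PRIMARY KEY,   -- primary key carried over from the logical model
    DeptName TEXT NOT NULL
);
CREATE TABLE Employee (
    EmpID    INTEGER PRIMARY KEY,
    EmpName  TEXT NOT NULL,
    HireDate TEXT,                  -- implementation detail: SQLite stores dates as text
    DeptID   INTEGER REFERENCES Department(DeptID)  -- foreign key
);
""")

cols = [row[1] for row in con.execute("PRAGMA table_info(Employee)")]
print(cols)  # ['EmpID', 'EmpName', 'HireDate', 'DeptID']
```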
Tutorial for using SQL in Microsoft Access (mcclellm)
SQL is a programming language used to manage data in relational databases. It allows users to insert, query, update and delete data from database tables. Microsoft Access is a common program that uses SQL to interact with its data tables, allowing users to run queries to retrieve certain records based on conditions. The document provides examples of SQL statements like SELECT, UPDATE, DELETE used in Microsoft Access and videos demonstrating how to execute them to select, modify and remove data from Access tables.
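The three statement types mentioned above can be demonstrated with Python's `sqlite3` (the table and data are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Staff (ID INTEGER PRIMARY KEY, Name TEXT, Salary REAL);
INSERT INTO Staff VALUES (1, 'Ana', 30000), (2, 'Ben', 28000), (3, 'Cal', 35000);
""")

# SELECT: retrieve records matching a condition.
names = [r[0] for r in con.execute(
    "SELECT Name FROM Staff WHERE Salary > 29000 ORDER BY Name")]

# UPDATE: modify matching records in place.
con.execute("UPDATE Staff SET Salary = Salary * 1.10 WHERE Name = 'Ben'")

# DELETE: remove matching records.
con.execute("DELETE FROM Staff WHERE Name = 'Cal'")

remaining = con.execute("SELECT COUNT(*) FROM Staff").fetchone()[0]
print(names, remaining)  # ['Ana', 'Cal'] 2
```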
This document defines dimensional data modeling and describes its key concepts. Dimensional modeling uses facts and dimensions to structure data warehouses in star or snowflake schemas for understandability and query performance. Facts are numeric measures that can be aggregated, while dimensions provide context as descriptive attributes. The document outlines the modeling process and benefits of dimensional modeling for data querying, extensibility, and understandability.
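A minimal star schema, sketched with `sqlite3`: one fact table of numeric measures surrounded by descriptive dimension tables. Table and column names follow common Kimball-style conventions but are invented for this example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DimProduct (ProductKey INTEGER PRIMARY KEY, Category TEXT);
CREATE TABLE DimDate    (DateKey INTEGER PRIMARY KEY, Year INTEGER);
CREATE TABLE FactSales  (ProductKey INTEGER, DateKey INTEGER, Amount REAL);
INSERT INTO DimProduct VALUES (1, 'Bikes'), (2, 'Helmets');
INSERT INTO DimDate    VALUES (20240101, 2024), (20250101, 2025);
INSERT INTO FactSales  VALUES (1, 20240101, 500), (1, 20250101, 700),
                              (2, 20250101, 60);
""")

# The classic star-join query: aggregate the facts, grouped by
# dimension attributes that give them context.
result = con.execute("""
    SELECT d.Year, p.Category, SUM(f.Amount)
    FROM FactSales f
    JOIN DimProduct p ON p.ProductKey = f.ProductKey
    JOIN DimDate    d ON d.DateKey    = f.DateKey
    GROUP BY d.Year, p.Category ORDER BY d.Year, p.Category
""").fetchall()
print(result)  # [(2024, 'Bikes', 500.0), (2025, 'Bikes', 700.0), (2025, 'Helmets', 60.0)]
```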
Datatypes, Field Properties, Validation and Masking (starsmileygirl34)
The document discusses various data types, field properties, validation rules, and input masks in Microsoft Access. It provides descriptions and examples of commonly used data types like text, number, currency, and date/time. It also explains properties for fields including field size, format, default value, validation rules, and required fields. Input masks are discussed as a way to enforce data formatting and prevent invalid entries. Validation rules and required fields are presented as methods to enforce data quality. Examples are given for different validation rule expressions and common input mask formats.
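Outside Access, the same two ideas — an input mask constraining the *shape* of an entry and a validation rule constraining its *value* — map naturally onto a regular expression and a range check. A small Python sketch (the phone pattern and age limits are illustrative):

```python
import re

# An Access input mask like "(999) 000-0000" fixes the shape of an entry;
# a regular expression plays the same role in code.
phone_mask = re.compile(r"^\(\d{3}\) \d{3}-\d{4}$")

# A validation rule like ">= 0 And <= 120" constrains the value itself.
def valid_age(age):
    return 0 <= age <= 120

print(bool(phone_mask.match("(513) 555-0123")))  # True
print(bool(phone_mask.match("513-555-0123")))    # False: wrong shape
print(valid_age(34), valid_age(150))             # True False
```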
Here are the steps to create a blank Access database:
1. Click the Blank Database template under New in the starting window.
2. In the File New Database dialog box, type a name for the database (e.g. "MyDatabase") and select a save location.
3. Click Create.
This will create a new blank Access database with the specified name and save it in the selected location.
This document discusses creating and designing an Access database and tables. It describes creating a blank database, adding and designing tables in Datasheet and Design views, and setting field properties. Key points include: creating a database using a template that can include pre-built tables, queries, forms and reports; setting a field's data type; adding fields in Design view; and using properties like Description, Field Size, Format, Default Value and Required to further define fields.
This lesson covers importing and exporting data between Access and other programs like Excel and Word. It also discusses creating form letters by merging data from an Access database into a Word document using merge fields. Key points include that imported data must match the field structure of the existing table, CSV files use commas to separate fields, and form letters generate a unique letter for each record by inserting data from the specified fields.
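The CSV points above — commas separating fields, and the header matching the target table's field structure — can be shown with Python's `csv` module (the names are made up):

```python
import csv, io

# CSV round trip: the header row plays the role of the table's field structure.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["FirstName", "LastName", "City"])   # header = field names
writer.writerow(["Ada", "Lovelace", "London"])
writer.writerow(["Alan", "Turing", "Wilmslow"])

# On import, DictReader maps each comma-separated value back to its field.
buffer.seek(0)
rows = list(csv.DictReader(buffer))
print(rows[0]["City"], len(rows))  # London 2
```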
This document provides information on data models in BI Publisher and their components. A data model contains instructions to retrieve structured data from one or more sources to generate BI Publisher reports. It can extract, transform, and aggregate data. Key components of a data model include data sets, triggers, flexfields, lists of values, parameters, and bursting definitions. The data model editor allows users to link data between sets, perform calculations, and select from various data sources when building a data model. It provides an interface to design the data structure and properties. Parameters and lists of values can be added to allow for user filtering of report data.
This document provides an introduction to Microsoft Access, covering how to start Access, open and work with databases and their objects like tables and queries. It describes database concepts like records and fields, and how to navigate, edit, and format datasheets. The summary reviews how to open, edit, delete and select data in a table, as well as change layouts, print, and close databases in Access.
This document provides an overview of key concepts for the MS Access ECDL module, including tables, fields, primary keys, relationships, queries, forms, and reports. It explains how to create tables with fields, set primary keys, and establish relationships between tables. Queries, forms, and reports are also introduced. The document aims to prepare the reader for the tasks and knowledge required for the ECDL Access certification.
hbaseconasia2017: Ecosystems with HBase and CloudTable service at Huawei (HBaseCon)
CTBase is a lightweight HBase client designed for structured data use cases. It provides features like schematized tables, global secondary indexes, cluster tables for joins, and online schema changes. Tagram is a distributed bitmap index implementation on HBase that supports ad-hoc queries on low-cardinality attributes with millisecond latency. CloudTable Service offers HBase as a managed service on Huawei Cloud with features including easy maintenance, security, high performance, service level agreements, high availability and low cost.
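The bitmap-index idea behind Tagram — fast ad-hoc queries on low-cardinality attributes — can be sketched in a few lines of Python, using an integer as a bit set per distinct value (a toy illustration, not Tagram's actual implementation):

```python
# One bitmap per distinct value of a low-cardinality attribute:
# bit i is set when row i holds that value.
rows = [("red", "S"), ("blue", "M"), ("red", "M"), ("blue", "S"), ("red", "M")]

def build_bitmaps(values):
    bitmaps = {}
    for i, v in enumerate(values):
        bitmaps[v] = bitmaps.get(v, 0) | (1 << i)
    return bitmaps

color = build_bitmaps([r[0] for r in rows])
size  = build_bitmaps([r[1] for r in rows])

# The ad-hoc query "color = red AND size = M" becomes one bitwise AND.
hits = color["red"] & size["M"]
matching = [i for i in range(len(rows)) if hits >> i & 1]
print(matching)  # [2, 4]
```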
Access lesson 04 Creating and Modifying Forms (Aram SE)
This document discusses creating and modifying forms in Microsoft Access. It covers creating forms using tools and wizards, navigating records in a form, finding and replacing data, and updating, adding, and deleting records using a form. It also discusses creating blank forms and modifying forms by adding fields and controls in Layout and Design views.
This is my presentation at SQLBits 8, Brighton, 9th April 2011. This session is about advanced dimensional modelling topics such as Fact Table Primary Key, Vertical Fact Tables, Aggregate Fact Tables, SCD Type 6, Snapshotting Transaction Fact Tables, 1 or 2 Dimensions, Dealing with Currency Rates, When to Snowflake, Dimensions with Multi Valued Attributes, Transaction-Level Dimensions, Very Large Dimensions, A Dimension With Only 1 Attribute, Rapidly Changing Dimensions, Banding Dimension Rows, Stamping Dimension Rows and Real Time Fact Table. Prerequisites: you need to have a basic knowledge of dimensional modelling and relational database design.
My name is Vincent Rainardi. I am a data warehouse & BI architect. I wrote a book on SQL Server data warehousing & BI, as well as many articles on my blog, www.datawarehouse.org.uk. I welcome questions and discussions on data warehousing at vrainardi@gmail.com. Enjoy the presentation.
1. Microsoft Access allows users to create and work with databases, tables, forms, queries, and reports. It provides tools for starting and exiting the program, creating and opening databases, and designing and manipulating tables, forms, queries, and reports.
2. Key features include creating and customizing tables with fields and records, entering and editing data, generating forms and reports from tables, and building queries to extract and calculate specific data.
3. Microsoft Access gives users flexibility in how they view and interact with different database components, allowing switching between design and data entry views, customizing properties and layouts, and printing finished reports.
This document provides information about SAP BW InfoObjects, including InfoObject catalogs, characteristics, and key figures. It discusses how to create InfoArea and InfoObject catalogs using transaction code RSA1. It describes the various tab pages for defining characteristic and key figure InfoObjects, such as general properties, hierarchies, attributes, aggregation rules, and time dependency settings. Characteristic InfoObjects can be defined with texts, master data, compounding to other objects, and external hierarchies. Key figure InfoObjects are defined with type, unit, aggregation behavior, and additional display properties. The document provides technical details on modeling and configuring InfoObjects in SAP BW.
An InfoCube contains integrated data from multiple sources and is optimized for analysis. It contains dimensions such as material, time, and sales organization, and key figures like sales quantity, revenue, and discount. An InfoCube allows users to analyze relationships between different data points for better business decisions.
This document discusses the different types of tables that are generated when activating an info object and its structures in SAP BI 7.0. It explains the master data table, text table, SID table, attribute tables (P, Q, X, Y tables), and hierarchies tables (H table) that can be created. It provides the naming conventions and key fields for each table type.
This document provides an overview of key SAP BW data modeling concepts:
1. InfoProviders include Data Store Objects for raw transactional data, InfoCubes for aggregated reporting data, InfoObjects, and MultiProviders that combine data from multiple sources.
2. Data Store Objects store consolidated transaction data at an atomic level and support detailed operational reporting.
3. InfoCubes are the central multidimensional data model, containing one fact table and up to 16 dimension tables linked to characteristics. Reports and analyses are based on InfoCubes.
4. MultiProviders combine data from InfoCubes, Data Store Objects, InfoObjects, and InfoSets to provide consolidated data for reporting without containing data directly.
This document provides information and instructions for the BIS 245 Week 3 Lab, which involves creating an entity relationship diagram (ERD) in Microsoft Visio and then using that ERD to build a database in Microsoft Access. The lab asks students to:
1. Create entities, attributes, keys, and relationships in a Visio ERD based on given data requirements and business rules for a bookstore database called "Pages in Time".
2. Specify data types for each attribute in the Visio ERD.
3. Modify Visio settings to display the physical data types in the diagram.
4. Create the Access database from the completed ERD, following steps to start a new blank database and
An info cube is a data storage area that maintains summarized and aggregated data in a star schema structure. It consists of one fact table containing key figures and dimensions tables. There are 11 steps to create a standard info cube which stores data physically in the cube: 1) Create a data source, 2) Create an info package to load data, 3) Create the info cube, 4) Assign info objects to dimensions, 5) Create a transformation, 6) Create the data transfer process, 7) Execute to load data, 8) Check loaded data. Standard info cubes allow only read access while virtual cubes access live data and real-time cubes allow read/write.
This document discusses characteristic info objects in SAP BW, which are used to analyze facts. It describes the types of info objects and provides steps to create a characteristic info object in BW. These include giving the info object a name and description, selecting attributes and settings for general properties, master data, hierarchies, and compounding. Characteristic info objects structure the master data needed for analysis in BW.
This document provides an overview of multi-dimensional modeling techniques used to create BI InfoCubes. It discusses:
1. The goals of multi-dimensional data models which are to present information to analysts in a way that corresponds to their business understanding and to provide a structure that software can access for analysis.
2. The basic modeling steps which include understanding the business process, creating an entity relationship model, translating this to a multi-dimensional model/star schema, and then implementing this in InfoCubes within the BI system.
3. Key concepts of multi-dimensional modeling including dimensions, facts, star schemas with dimension tables surrounding a central fact table, and granularity determined by the most atomic attributes.
SDN Beginners BI
Chapter 1: Introduction to SAP BI from SDN
o BI - Business Intelligence (Reporting and Analysis)
o OLAP: Online Analytical Processing (SAP BI)
o OLTP: Online Transaction Processing (SAP SD, MM, FICO, ABAP, HR)
o Basics:
o BI is a data warehousing tool
o ETL: Extraction > Transformation > Loading
o BI is used by middle- and high-level management
o PSA (Persistent Staging Area): Used to store incoming data as delivered from the source and to correct load errors.
Chapter 2: Info Objects (SDN)
Info objects are the fields in the BI system. They are divided into two types:
1. Characteristics: Used to describe key figures
Ex: Material, Customer
The characteristics are divided into three types. They are:
a. Time Characteristics
b. Unit Characteristics
c. Technical Characteristics
a. Time Characteristics include day, month, quarter, half-year, and year. They are generated by
the system.
Note: Info objects are of two types,
i. System generated (0)
ii. Customer generated (Z)
b. Unit Characteristics include currency, unit. (0Currency, 0Unit)
Material Amount 0Currency Quantity 0Unit
E620 400 Rs 10
E621 500 $ 12
They are always assigned to key figures of type amount and quantity (as shown in the above example).
c. Technical Characteristics include 0requestID, 0changeID, 0recordID.
2. Key Figures: Used for calculation purpose
Ex: Amount, Quantity
The key figures are divided into two types. They are:
a. Cumulative key figures
b. Non-cumulative key figures
a. Cumulative key figures are used when the data in the key figure field needs to be added.

Material Amount
E621 100
E622 200
E623 300
Total: 600

b. Non-cumulative key figures are used in MM and HR related reports.

Plant Material Stock Value Date
4002 Pencil 500 28/04/2012
4002 Pencil 600 29/04/2012

Records in the 'Stock Value' field are not added.
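The two aggregation behaviours can be sketched as follows. This is a Python illustration, not SAP code; the function names are invented, and the data mirrors the examples above.

```python
# Sketch of the two key figure aggregation behaviours in a BI query.

sales = [  # cumulative key figure: Amount
    {"material": "E621", "amount": 100},
    {"material": "E622", "amount": 200},
    {"material": "E623", "amount": 300},
]

stock = [  # non-cumulative key figure: Stock Value per date
    {"material": "Pencil", "stock_value": 500, "date": "2012-04-28"},
    {"material": "Pencil", "stock_value": 600, "date": "2012-04-29"},
]

def aggregate_cumulative(rows, field):
    """Cumulative key figures are summed across records."""
    return sum(r[field] for r in rows)

def aggregate_non_cumulative(rows, field):
    """Non-cumulative key figures report the value valid at a point in
    time (here: the latest date) instead of adding the records."""
    return max(rows, key=lambda r: r["date"])[field]

print(aggregate_cumulative(sales, "amount"))           # 600, the Total row
print(aggregate_non_cumulative(stock, "stock_value"))  # 600, the 29/04/2012 value
```

Note that both calls happen to return 600 here, but for different reasons: the first adds three records, the second picks the most recent one.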
Steps to create info objects of type characteristics and key figures:
Part 1:
1. Go to RSA1
2. Go to 'Info Object' selection
3. Right click and select 'Create Info Area' from the context menu
4. Give the technical name (Always unique)
5. Give description
6. Click on Continue
Part 2:
1. Right click on Info Area > Select create 'Info Object Catalog'
2. Give technical name
3. Give description
4. Select info object type 'Characteristic'
5. Click on Activate button
Part 3:
1. Right click on Info area > Select create 'Info Object Catalog'
2. Give technical name and description
3. Select info object type 'Key Figure'
4. Click on Activate button
Part 4:
1. Right click on Info Object Catalog for characteristics
2. Select create Info Object
3. Give technical name (length between 3 and 8)
4. Give description
5. Click on Continue
6. Give mandatory options in the 'General' tab page (like Data type, length .. )
7. Click on Activate button
Part 5:
1. Right click on the Info Object Catalog for key figures
2. Select create Info Object
3. Give technical name (length between 3 and 8)
4. Give description
5. Click on Continue
6. For key figures of type 'Amount' and 'Quantity', we have to assign a unit characteristic (0Currency/0Unit)
7. Click on Activate button
There are two types of data in SAP (ERP). They are:
1. Master Data
2. Transaction Data
1. Master Data: It is always assigned to a characteristic. From the SAP BI point of view, master data
doesn't change frequently.
Note: A characteristic is called a master data characteristic if it has attributes, texts and hierarchies.
i. Attributes: These are info objects which describe a characteristic in detail. They are divided into two types:
a. Navigational attributes
b. Display attributes
Steps to create Attributes (type characteristic):
Part 1:
1. Go to Info object of type characteristic
2. Go to 'Display/Change'
3. In the 'Master data text' tab page, check the 'With Master Data' checkbox
4. Go to the Attribute tab page
5. Give technical name of attribute
6. Click Enter
7. Give description
8. Give data type, length
9. Click on continue
10. Activate the info object
Part 2:
1. If the info object is already in the system, copy the technical name of the info object
2. Go to attribute tab page of char
3. Paste the technical name of the info object
4. Click on Activate button
Note: A key figure can be an attribute of a characteristic, and it can only be a display attribute.
Steps to enable Texts:
1. Right click on the info object, select 'Change', go to the 'Master Data/Texts' tab page, and select the 'With Texts' checkbox.
Example of drilling down from Company Code to Sales Org to Division:

Company Code Amount
India 2000
USA 2500
Company Code Sales Org Amount
India Hyderabad 2000
Bangalore 2000
USA New York 2500
Washington D.C 2500
Company Code Sales Org Division Amount
India Hyderabad Ameerpet 1000
Begumpet 1000
Bangalore Electronic City 1000
Silk Board 1000
USA New York 7th Street 1250
9th lane 1250
Washington D.C 8th street 1250
10th street 1250
Navigational Attribute: We can drill down using a navigational attribute. It acts as a
characteristic in the report.
Display Attribute: We cannot drill down using a display attribute.
Note:
1. Attribute Only: If you mark the characteristic as 'attribute only', it can be used only as a display
attribute, not as a navigational attribute.
2. Such a characteristic cannot be transferred into an info cube directly.
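As a rough illustration of why a navigational attribute "acts as a characteristic in the report", here is a Python sketch. This is not SAP code; the master data, fact rows, and field names are invented for the example.

```python
from collections import defaultdict

# Master data for the characteristic Sales Org, with Company Code as a
# navigational attribute. Master data lives outside the cube.
sales_org_attrs = {
    "Hyderabad": {"company_code": "India"},
    "Bangalore": {"company_code": "India"},
    "New York": {"company_code": "USA"},
    "Washington D.C": {"company_code": "USA"},
}

# Fact data in the cube stores only the characteristic value, never its attributes.
fact_rows = [
    {"sales_org": "Hyderabad", "amount": 2000},
    {"sales_org": "Bangalore", "amount": 2000},
    {"sales_org": "New York", "amount": 2500},
    {"sales_org": "Washington D.C", "amount": 2500},
]

def drilldown_by(attr, rows):
    """A navigational attribute behaves like a characteristic in the
    report: the OLAP engine joins to master data at query time."""
    totals = defaultdict(int)
    for row in rows:
        totals[sales_org_attrs[row["sales_org"]][attr]] += row["amount"]
    return dict(totals)

print(drilldown_by("company_code", fact_rows))
# {'India': 4000, 'USA': 5000}
```

A display attribute, by contrast, could only be shown next to each Sales Org row; the report could not group or total by it.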
Steps to change attributes from navigational to display:
1. Go to the 'Attribute' tab page; in the 'Navigation On/Off' column, click the pencil icon.
2. When changing a display attribute to navigational, give a description and click on the Activate button.
Steps to create attribute (type Key Figure):
1. Go to info object, go to 'Attribute' tab page
2. Give technical name
3. Click on Enter, Select radio button 'Create attribute as key figure'
4. Click on Continue
5. Give description and data type
6. Click on continue
7. Click on activate button
Tab Pages of a Characteristic:
1. General tab page:
- Data Element: The technical name of the info object at the database level (it is like a field on a database table).
- Data Type: Char (1-60), String, Numeric (1-60), Date (8), Time (6)
- Lower Case Letters: If the characteristic values contain lower case letters, select the 'Lowercase Letters' option.
- SID Table: Surrogate ID (master data ID) table.
2. Business Explorer tab page: The selections made on this tab page are displayed by default at report level.
3. Master Data/Texts tab page: The info object has the following tables:
P -> Time-independent display attributes
Q -> Time-dependent display attributes
X -> Time-independent navigational attributes
Y -> Time-dependent navigational attributes
Text: If we select this option, we can maintain texts for the characteristic.
Hierarchy: To enable hierarchies, select the 'With Hierarchies' option.
Attribute: Here we assign the attributes for the characteristic.
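The difference between the time-independent (P/X) and time-dependent (Q/Y) tables can be sketched like this. It is a Python illustration with invented characteristic, attribute, and field names; real Q tables key the validity interval with DATEFROM/DATETO fields.

```python
import datetime as dt

# Sketch of a time-dependent (Q-style) attribute table: each row carries a
# validity interval, unlike the time-independent P/X tables.
q_table = [  # invented rows: characteristic EMPLOYEE, attribute COST_CENTER
    {"employee": "1001", "date_from": dt.date(2011, 1, 1),
     "date_to": dt.date(2011, 12, 31), "cost_center": "CC_A"},
    {"employee": "1001", "date_from": dt.date(2012, 1, 1),
     "date_to": dt.date(9999, 12, 31), "cost_center": "CC_B"},
]

def attribute_on(key_date, employee):
    """Return the attribute value valid on the given key date."""
    for row in q_table:
        if (row["employee"] == employee
                and row["date_from"] <= key_date <= row["date_to"]):
            return row["cost_center"]
    return None

print(attribute_on(dt.date(2011, 6, 15), "1001"))  # CC_A
print(attribute_on(dt.date(2012, 6, 15), "1001"))  # CC_B
```

A time-independent attribute would be a single row per employee with no interval; queries on it ignore the key date entirely.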
ii. Text: The same report can be displayed in a different language in each country. This is possible
because of the 'Text' functionality.
iii. Hierarchy
Chapter 3: Extended Star Schema (SCN)
o Fact table consists of DIM ID and key figures.
o Every Info cube has two types of tables
a. Fact table
b. Dimension tables
o An info cube consists of one fact table (physically an F table for newly loaded requests and an E table for compressed data), surrounded by multiple dimension tables.
o Maximum number of dimension tables in an info cube is 16 and the minimum number is 4.
o There are 3 system generated tables
a. Data Package dimension table (Technical dimension)
b. Time dimension
c. Unit dimension
o The maximum number of key figures in an info cube is 233
o The maximum number of characteristics in an info cube is 248
Advantages of Extended Star Schema:
o Faster loading of data / faster access to reports
o Sharing of master data
o Easy loading of time dependent objects
Classical Star Schema:
o In classical star schema, the characteristic record is directly stored in DIM tables.
o For every Dimension table, a DIM ID is generated and it is stored in the fact table.
Differences between the Classical Star Schema and the Extended Star Schema:
o In the classical star schema, the dimension and master data tables are the same. In the extended star
schema they are separate: master data resides outside the info cube, while the dimension tables reside inside it.
o In the classical star schema we can analyze data from only 16 perspectives, whereas in the extended star
schema we can analyze it from up to 16*248 perspectives, with correspondingly better performance.
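The lookup chain of the extended star schema can be sketched as follows. This is a Python illustration, not SAP code; the table contents and field names are invented, and each dict stands in for one relational table.

```python
# Extended star schema lookup chain:
# fact row -> dimension table (DIM ID -> SID) -> SID table -> master data.

fact_table = [{"dim_id_material": 1, "revenue": 400}]

dim_material = {1: {"sid_material": 501}}         # dimension table, inside the cube
sid_material = {501: "E620"}                      # SID table: surrogate ID -> key
master_material = {"E620": {"division": "Pens"}}  # shared master data, outside the cube

def resolve(fact_row):
    """Follow DIM ID -> SID -> master data for one fact row."""
    sid = dim_material[fact_row["dim_id_material"]]["sid_material"]
    material = sid_material[sid]
    attrs = master_material[material]
    return material, attrs["division"], fact_row["revenue"]

print(resolve(fact_table[0]))  # ('E620', 'Pens', 400)
```

The extra hop through the SID table is the query-time overhead mentioned later; the payoff is that `master_material` can be shared by every cube instead of being copied into each one's dimension tables.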
Guide to SAP Beginners
InfoCube
An info cube is structured as an (extended) star schema, where a fact table is surrounded by
dimension tables that are linked to it by DIM IDs. Data-wise, the cube holds aggregated data.
An info cube contains a maximum of 16 dimensions (3 SAP-defined and 13 customer-defined) and a
minimum of 4 (3 SAP-defined and 1 customer-defined), with a maximum of 233 key figures
and 248 characteristics.
The following InfoCube types exist in BI:
. InfoCubes
. VirtualProviders
There are two subtypes of InfoCubes: Standard and Real-Time. Although both have an
extended star schema design, Real-Time InfoCubes (previously called Transactional InfoCubes)
are optimized for direct update and do not need to use the ETL process. Real-Time InfoCubes
are used almost exclusively in the BI Integrated Planning tool set. Every BI InfoCube consists of a
set of relational tables arranged together in a star schema.
Star Schema
In the star schema model, the fact table is surrounded by dimension tables. The fact table is usually very
large, containing millions to billions of records, while the dimension tables are comparatively small,
holding a few thousand to a few million records. In practice, the fact table holds transactional data and
the dimension tables hold master data.
The dimensional tables are specific to a fact table. This means that dimensional tables are not
shared to across other fact tables. When other fact table such as a product needs the same product
dimension data another dimension table that is specific to a new fact table is needed.
This situation creates data management problems such as master data redundancy because the
very same product is duplicated in several dimensional tables instead of sharing from one single
master data table. This problem can be solved in extended star schema.
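The fact/dimension split described above can be sketched as a simple join: the fact table carries DIM IDs and key figures, while a dimension table resolves a DIM ID to characteristic values. A minimal illustration in ABAP Open SQL; the table and field names (zfact_sales, zdim_product, dimid_product, amount) are assumptions for illustration, not generated BI table names:

```abap
* Hypothetical star-schema query: join the fact table to one of its
* dimension tables via the DIM ID and aggregate a key figure per
* characteristic value. All table and field names are illustrative.
TYPES: BEGIN OF t_result,
         material TYPE /bi0/oimaterial,
         amount   TYPE p LENGTH 15 DECIMALS 2,
       END OF t_result.
DATA lt_result TYPE STANDARD TABLE OF t_result.

SELECT d~material SUM( f~amount )
  INTO TABLE lt_result
  FROM zfact_sales AS f
  INNER JOIN zdim_product AS d
    ON f~dimid_product = d~dimid
  GROUP BY d~material.
```

In a real InfoCube the fact and dimension tables are generated by BI (/BIC/F*, /BIC/D*) and queries go through the OLAP engine; this sketch only shows the underlying relational idea.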
Extended star schema
In the Extended Star Schema, under the BW star schema model, the dimension tables do not
contain master data. Instead, master data is stored externally in its own master data tables
(texts, attributes, hierarchies).
The characteristics in the dimension tables point to the relevant master data through SID
tables. The SID table in turn points to the characteristic's attributes, texts, and hierarchies.
This multistep navigation adds some overhead when executing a query. The benefit of this
model, however, is that all fact tables (InfoCubes) share common master data tables; the same
master data serves several InfoCubes.
Moreover, the SID table concept allows users to implement multilanguage and multi-hierarchy
OLAP environments, and it also supports slowly changing dimensions.
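The SID indirection can be traced with two lookups: the dimension table holds only a SID; the SID (S) table maps it back to the characteristic value, which then keys the master data (P) table. A sketch using the standard /BI0/S* and /BI0/P* naming for 0MATERIAL (the SID value itself is an assumed example):

```abap
* Resolve a SID to its characteristic value, then read an attribute
* from the master data table. /BI0/SMATERIAL is the SID table and
* /BI0/PMATERIAL the time-independent attribute table for 0MATERIAL.
DATA: l_sid      TYPE rssid,
      l_material TYPE /bi0/oimaterial,
      l_division TYPE /bi0/oidivision.

l_sid = 4711.                       "example SID taken from a DIM table

* Step 1: SID table -> characteristic value
SELECT SINGLE material FROM /bi0/smaterial
  INTO l_material
  WHERE sid = l_sid.

* Step 2: characteristic value -> master data attribute
SELECT SINGLE division FROM /bi0/pmaterial
  INTO l_division
  WHERE material = l_material
    AND objvers  = 'A'.
```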
Routine Lesson 1
Scenario: the DataSource does not provide division, and we need to derive it from material,
which does exist in the DataSource. Populate the cube with the derived division.
Solution:
Division is derived from material using the master data table /BI0/PMATERIAL.
wa_th_material is a hashed internal table whose line type is the structure t_material;
wa_material is a work area of the same type. t_material has material and division as its two
fields. In the end routine of the transformation, the record is read into the work area
wa_material using the loaded material, i.e. <result_fields>-material, as the key.
Start Routine: use a SELECT statement to load the internal table.
CODE SNIPPET:
if wa_th_material[] is initial.
* Load division by material
  select material division
    into table wa_th_material
    from /bi0/pmaterial
    where objvers = 'A'.
endif.
End Routine: use a READ statement to read the internal table populated in the start
routine into a work area using a KEY. If a record is found, assign its value to the
corresponding end-routine field.
CODE SNIPPET:
read table wa_th_material
  into wa_material
  with table key material = <result_fields>-material.
if sy-subrc = 0.
  <result_fields>-division = wa_material-division.
endif.
DATA DEFINITION:
types: begin of t_material,
         material type /bi0/oimaterial,
         division type /bi0/oidivision,
       end of t_material.
data: wa_th_material type hashed table of t_material with unique key material,
      wa_material    type t_material.
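For context, in a generated end routine the READ typically sits inside the loop over RESULT_PACKAGE. A sketch (the surrounding method signature is generated by the transformation framework and omitted here):

```abap
* End-routine sketch: derive division for every record in the package.
LOOP AT result_package ASSIGNING <result_fields>.
  READ TABLE wa_th_material
    INTO wa_material
    WITH TABLE KEY material = <result_fields>-material.
  IF sy-subrc = 0.
    <result_fields>-division = wa_material-division.
  ENDIF.
ENDLOOP.
```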
Routine Lesson 2
Scenario: the cube needs a customer number, and the DataSource does not provide one. The
DataSource does, however, contain a country code such as DE or FR. Based on the country code
a particular customer number is assigned, e.g. DE01J45 for DE and FR023J4 for FR. This
customer number needs to be populated in the cube.
SOLUTION:
In this scenario the transformation from the DSO to the cube is adjusted: the start routine
loads a constant from the table ZBW_CONSTANT_TAB, using a field of that table as the lookup
key. The constant is then used in the end routine in a CASE statement, and the
RESULT_FIELDS are populated accordingly.
START ROUTINE:
Code snippet:
select single low from ZBW_CONSTANT_TAB
  into g_de_billto
  where vnam = 'JV_DE_BILLTO'.
END ROUTINE:
Code snippet:
case <result_fields>-/bic/zjvsource.
  when 'DE'.
    <result_fields>-ship_to    = g_de_billto.
    <result_fields>-sold_to    = g_de_billto.
    <result_fields>-billtoprty = g_de_billto.
    <result_fields>-payer      = g_de_billto.
endcase.
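The scenario also mentions FR, which the snippet does not handle. A second constant would be loaded the same way in the start routine and handled as another WHEN branch; in this sketch the variable g_fr_billto and the lookup key JV_FR_BILLTO are assumptions:

```abap
* Start routine: load the FR constant as well (key name assumed).
select single low from ZBW_CONSTANT_TAB
  into g_fr_billto
  where vnam = 'JV_FR_BILLTO'.

* End routine: one WHEN branch per country code.
case <result_fields>-/bic/zjvsource.
  when 'DE'.
    <result_fields>-payer = g_de_billto.
  when 'FR'.
    <result_fields>-payer = g_fr_billto.
endcase.
```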
Routine Lesson 3
Scenario: an InfoObject in the cube has to be updated with a constant value, and this
InfoObject does not come from the DataSource.
Solution: go to the DSO and add the InfoObject that is not sourced from the DataSource. In the
transformation, right-click the InfoObject and choose RULE DETAILS. In the Rule Details
dialog, choose Constant as the rule type and enter the value.
Useful Tables for DSO (DataStore Object)
A listing of commonly used tables in SAP BI, to help understand how data is stored in the
SAP BI backend.
ODS Object
RSDODSO Directory of all ODS Objects
RSDODSOT Texts of all ODS Objects
RSDODSOIOBJ InfoObjects of ODS Objects
RSDODSOATRNAV Navigation Attributes for ODS Object
RSDODSOTABL Directory of all ODS Object Tables
Useful Tables for Aggregates
Aggregates
RSDDAGGRDIR Directory of Aggregates
RSDDAGGRCOMP Description of Aggregates
RSDDAGGRT Text on Aggregates
RSDDAGGLT Directory of the aggregates, texts
Useful Tables for InfoCube
InfoCube
RSDCUBE Directory of InfoCubes
RSDCUBET Texts on InfoCubes
RSDCUBEIOBJ Objects per InfoCube (where-used list)
RSDDIME Directory of Dimensions
RSDDIMET Texts on Dimensions
RSDDIMEIOBJ InfoObjects for each Dimension (Where-Used List)
RSDCUBEMULTI InfoCubes involved in a MultiCube
RSDICMULTIIOBJ MultiProvider: Selection/Identification of InfoObjects
RSDICHAPRO Characteristic Properties Specific to an InfoCube
RSDIKYFPRO Flag Properties Specific to an InfoCube
RSDICVALIOBJ InfoObjects of the Stock Validity Table for the InfoCube
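These catalog tables can also be queried directly, for example to list the active InfoCubes together with their descriptions by joining RSDCUBE and RSDCUBET. A sketch; the field names (TXTLG, OBJVERS, LANGU) follow the standard table definitions, but verify them in SE11 on your system:

```abap
* List active (OBJVERS = 'A') InfoCubes with their long texts in the
* logon language by joining the directory and text tables.
TYPES: BEGIN OF t_cube,
         infocube TYPE rsdcube-infocube,
         txtlg    TYPE rsdcubet-txtlg,
       END OF t_cube.
DATA lt_cubes TYPE STANDARD TABLE OF t_cube.

SELECT c~infocube t~txtlg
  INTO TABLE lt_cubes
  FROM rsdcube AS c
  INNER JOIN rsdcubet AS t
    ON t~infocube = c~infocube
  WHERE c~objvers = 'A'
    AND t~langu   = sy-langu.
```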