In some Hyperion Planning projects, security becomes so complex that it takes more than simply assigning security groups to the high-level members of the dimensions. Global companies often need to create multiple Planning applications to serve the different regions of the globe. But what happens when the business requires a single application with a single plan type that contains cost centers from different regions throughout the entity hierarchy? Moreover, data must be restricted according to each region's security group using only one attribute dimension, and each user must see correctly aggregated values for his or her region only. This case study shows how to generate and maintain leaf-level member security settings based on a physical geography attribute dimension in one of Dell's global Planning applications, using only ODI and the Planning application metadata repository.
This document discusses using Oracle Data Integrator (ODI) to validate data against Hyperion Planning metadata before loading the data into Essbase cubes. It proposes using a single generic inbound table in ODI to hold data for multiple Planning applications. ODI constraints would validate the data against Planning repositories to ensure only valid members are loaded to Essbase. This prevents slow cell-by-cell loads and allows adding new Planning applications easily with minimal ODI changes.
Incredible ODI tips to work with Hyperion tools that you ever wanted to know (Rodrigo Radtke de Souza)
ODI is an incredible and flexible development tool that goes beyond simple data integration. But most of its development power comes from outside-the-box ideas.
* Did you ever want to dynamically run any number of “OS” commands using a single ODI component?
* Did you ever want to have only one data store and loop different sources without the need of different ODI contexts?
* Did you ever want to have only one interface and loop any number of ODI objects with a lot of control?
* Did you ever need to have a “third command tab” in your procedures or KMs to improve ODI powers?
* Do you still use an old version of ODI and miss a way to know the values of the variables in a scenario execution?
* Did you know ODI has four “substitution tags”? And do you know how useful they are?
* Do you use “dynamic variables” and know how powerful they can be?
* Do you know how to have control over your ODI priority jobs automatically (stop, start, and restart scenarios)?
The document describes Java Database Connectivity (JDBC), which provides Java applications with access to most database systems via SQL. It outlines the JDBC architecture and classes in the java.sql package. JDBC drivers allow applications to connect to databases without using proprietary APIs. There are four types of JDBC drivers. The document also provides an example of how to load a driver, connect to a database, execute a query, and retrieve and display results.
Data Warehouse - What you know about the ETL process is wrong (Massimo Cenci)
The document discusses redefining the typical ETL process. It argues that the traditional understanding of ETL, consisting of extraction, transformation, and loading, is misleading and does not accurately describe the workflow. Specifically, it notes that:
1) The extraction step is usually handled by external source systems, not the data warehouse team.
2) There is a missing configuration and data acquisition step before loading.
3) Transformation is better thought of as data enrichment rather than transformation.
4) The loading phase is unclear about where the data should be loaded.
It proposes redefining the process as configuration, acquisition, loading (to a staging area), enrichment, and final loading to the data warehouse.
This document introduces Oracle9i and relational database concepts. It discusses Oracle9i features like scalability and reliability. It also explains that a relational database consists of tables related through primary and foreign keys that can be accessed using SQL. The Oracle database server allows storage and querying of data across these tables.
In this lecture we look at the patterns in chapter 18 in the textbook (Patterns of Enterprise Application Architecture). The lecture is in two parts. First we go through each of the patterns and explain each.
Then in the second part we look at a problem we have to solve and try to get the patterns to show themselves at the time they are needed.
This document provides a brief overview of Java Database Connectivity (JDBC):
JDBC allows Java programs to connect to and interact with various database systems and provides a standard API for querying and manipulating data in relational databases. The key components of JDBC include the DriverManager, Connection, Statement, and ResultSet objects, and connecting to a database typically involves 4 steps - loading a driver, opening a connection, executing SQL statements via a Statement object, and processing result sets. JDBC offers database independence and ease of administration along with the ability to access any database from Java applications.
The document discusses Java Database Connectivity (JDBC) and how it allows Java programs to connect to databases. It describes the four types of JDBC drivers, the core JDBC interfaces like Driver, Connection, and Statement, and how to use JDBC to perform CRUD operations. The key interfaces allow establishing a database connection and executing SQL statements to retrieve and manipulate data.
InterConnect 2016, OpenJPA and EclipseLink Usage Scenarios (PEJ-5303) (Kevin Sutter)
Presentation given at InterConnect 2016. With the introduction of EclipseLink as another JPA provider for WebSphere, this presentation will help with the usage and migration scenarios.
EclipseLink is an open source persistence framework that includes EclipseLink JPA for object-relational mapping, EclipseLink MOXy for object-XML mapping, and other services. EclipseLink JPA is the reference implementation for JPA 2.0 and supports advanced features through extensions for areas like caching, locking, and stored procedures. These extensions allow for high performance and tunability across databases while leveraging underlying technologies.
The document discusses JDBC (Java Database Connectivity), which provides Java applications with methods to access databases. It covers JDBC architecture and driver types, including Type 1 (JDBC-ODBC bridge), Type 2 (native API), Type 3 (network protocol), and Type 4 (pure Java) drivers. The key classes and interfaces of the JDBC API are also summarized, along with the typical steps to connect to a database using JDBC: loading a driver, connecting, executing statements, and handling exceptions.
The document discusses JDBC (Java Database Connectivity) and its architecture. It describes JDBC as a standard Java API that allows Java applications to connect to databases. It outlines the key components of JDBC including the driver manager, drivers, connections, statements, result sets, and how they interact. It also discusses the different types of JDBC drivers and how prepared statements and callable statements work.
Java applications cannot directly communicate with a database to submit data and retrieve the results of queries.
This is because a database can interpret only SQL statements and not Java language statements.
For this reason, you need a mechanism to translate Java statements into SQL statements.
The JDBC architecture provides the mechanism for this kind of translation.
The JDBC architecture can be classified into two layers :
JDBC application layer.
JDBC driver layer.
JDBC application layer: Signifies a Java application that uses the JDBC API to interact with the JDBC drivers. A JDBC driver is software that a Java application uses to access a database. The JDBC driver manager of the JDBC API connects the Java application to the driver.
JDBC driver layer: Acts as an interface between a Java application and a database. This layer contains a driver, such as a SQL Server driver or an Oracle driver, which enables connectivity to a database.
A driver sends the request of a Java application to the database. After processing the request, the database sends the response back to the driver. The driver translates and sends the response to the JDBC API. The JDBC API forwards it to the Java application.
One Less Thing For DBAs to Worry About: Automatic Indexing (Jim Czuprynski)
You’re a busy Oracle DBA. Your phone rings. It’s your most troublesome user, once again complaining that her query is running slow. You take a quick look at the execution plan, find a possible choice for a new index to improve its performance, and drop it in place: Problem solved. Or is it? Even an experienced DBA may not immediately realize the impact that new index will have on the performance of dozens of other queries and DML statements.
Finally, there’s a better way: Let the database decide.
I'll show you how Automatic Indexing (AI) - one of the newest features of Oracle Database 19c - provides an intriguing alternative to reactive performance tuning methodologies for index creation. We'll look at how AI reacts to a heavy hybrid application workload and then holistically builds, tests, and implements the most appropriate secondary indexes needed to improve database performance.
JDBC provides a standard interface for connecting to relational databases from Java applications. It establishes a connection with a database, allows sending SQL statements to it, and processing the results. The key classes and interfaces in JDBC are located in the java.sql package. JDBC supports connecting to all major databases and provides a consistent API for database access.
Stored procedures and functions are named PL/SQL blocks that are stored in a database. They improve performance by reducing network traffic and allowing shared memory usage. Stored procedures are created using the CREATE PROCEDURE statement and can accept parameters using modes like IN, OUT, and IN OUT. Stored functions are similar but return a value. Packages group related database objects like procedures, functions, types and provide modularity and information hiding.
This document discusses Extract, Transform, Load (ETL) processes. It covers topics like ETL definitions, implementation, planning, design and development. Specific sections define ETL processing, discuss ETL tools versus hand-coded solutions, ETL project environments and architectures. It also addresses ETL and data quality, how to handle incorrect source values. The presenter is Igor Bralgin and the agenda suggests exploring ETL processing, implementation, common pitfalls and taking questions.
This document contains a collection of questions and answers related to Informatica technical interviews. It includes questions about bitmap indexes, deleting duplicate rows from flat files, recovery strategies after a session fails, limitations of joiner transformations, how the server recognizes source and target databases, the purpose of rank indexes in a group, database operation constants and flags, generating reports using Informatica, starting batches within batches, types of groups in a router transformation, types of batches, the PowerCenter repository, differences between dynamic and static caches, the use of source qualifiers, page code compatibility, synonyms, and types of lookup caches.
This document provides an overview of JDBC (Java Database Connectivity) including:
- JDBC allows Java applications to connect to databases using SQL and handles vendor differences through drivers.
- There are 4 types of JDBC drivers that handle database connections differently.
- Key JDBC interfaces like Connection, Statement, PreparedStatement, CallableStatement, ResultSet allow executing queries and accessing results.
- Stored procedures can be executed through CallableStatements. Transactions ensure atomic execution across databases. Connections must be closed in the proper sequence.
This document discusses SQL functions. It defines SQL functions as sub-programs commonly used for processing or manipulating data in SQL databases. It notes that SQL functions have input parameters and return a single value. The document categorizes SQL functions into built-in and user-defined functions. Built-in functions are standard functions provided by the SQL system, while user-defined functions are created by users for specific purposes. Some advantages of SQL functions are that they can be reused, improve performance, and make complex logic easier to understand. Examples of aggregate functions like AVG, COUNT, MAX, and SUM are provided.
MuleSoft Nashik Virtual Meetup#3 - Deep Dive Into DataWeave and its Module (Jitendra Bafna)
Deep Dive Into DataWeave and its Modules
The document discusses DataWeave, MuleSoft's data transformation language. It covers DataWeave modules, operators, working with arrays and objects, and Mule runtime features. Key topics include DataWeave fundamentals like data types, reading/writing data, variables, operators, and flow control. Functions, filtering, mapping, reducing, and updating arrays and objects are also summarized.
CallableStatement allows Java applications to call stored procedures in a database. Stored procedures are programs stored in a database that can be run from an application to improve performance. Database developers create stored procedures that are executed on the database server to encapsulate common operations.
JDBC (Java Database Connectivity) is a standard Java API for connecting to databases. It provides interfaces for tasks like making database connections, executing SQL statements, and retrieving results. There are 4 types of JDBC drivers that implement the JDBC interfaces in different ways. A basic JDBC program imports SQL packages, registers the JDBC driver, gets a database connection, executes SQL statements using a Statement object, extracts result data, and closes resources.
What is Data Warehousing?
Who needs Data Warehousing?
Why is a Data Warehouse required?
Types of Systems
OLTP
OLAP
Maintenance of Data Warehouse
Data Warehousing Life Cycle
Manage security in Model-app Power App with Common data service (Learning SharePoint)
Manage security in a model-driven Power App with Common Data Service: three options are discussed, managing security by Owner and Access Teams, by sharing individual records, and by creating Group Teams (AAD and Office 365 Teams).
The document discusses security policies, mechanisms, and formal languages for expressing policies. It covers types of security policies like confidentiality and integrity policies. It also discusses policy models, access control types, and examples of formal policy languages like DTEL that use domains and types to constrain access. Trust assumptions in formal methods are outlined, and the relationship between secure and precise enforcement mechanisms is discussed.
ODTUG Learn from Home Series - Automating Security Management in PBCS! (Dayalan Punniyamoorthy)
The document discusses automating security management in Oracle Planning and Budgeting Cloud (PBC). It describes the different artifacts and granular levels that can have security applied in PBC, including users, groups, roles, and dimensions/values. It then covers best practices for addressing security in bulk using the Lifecycle Management (LCM) tool, EPM Automate commands, and REST APIs. The presentation includes a demo and Q&A section.
CRMUG UK November 2015 - Dynamics CRM Security Modelling and Performance by A... (Wesleyan)
This document discusses performance considerations for Dynamics CRM security modeling. It provides an overview of how security is evaluated in CRM through user roles, teams, record sharing and ownership. It also covers the potential performance impacts of cascading behaviors, the principal object access table, and hierarchical security modeling. Tips are provided for optimizing the security model design.
Goals of Protection
Principles of Protection
Domain of Protection
Access Matrix
Implementation of Access Matrix
Access Control
Revocation of Access Rights
Capability-Based Systems
Language-Based Protection
User accounts, authentication, strong passwords, and network security are important controls to ensure authorized access and prevent unauthorized access. Group policy objects and security groups can be used to centrally manage permissions and settings for users and computers. Creating user and computer accounts, defining different group types and scopes, and configuring group policies allows administrators to effectively manage security and resources on the network.
This document discusses security settings and access controls for an application. It describes how user access rights are determined based on their membership and the security classes assigned to data and documents. Administrators can define security classes, assign users and groups to classes to control access, run security and audit reports, and load, extract, and migrate security configurations between systems.
The session will address the following points:
* Introduction to security in Oracle EPM Cloud Planning
* What are the artifacts/granular levels that can have security in PBC?
* What are the best practices for addressing security?
* How can you mass update security using EPM Automate, REST API, Groovy, LCM, etc.?
Object design is the process of refining requirements analysis models and making implementation decisions to optimize execution time, memory usage, and other performance measures. It involves four main activities: service specification to define class interfaces; component selection and reuse of existing solutions; restructuring models to improve code reuse; and optimization to meet performance requirements. During object design, interfaces are fully specified with visibility, type signatures, and contracts to clearly define class responsibilities.
This document provides an overview of object-oriented programming (OOP) including:
- The history and key concepts of OOP like classes, objects, inheritance, polymorphism, and encapsulation.
- Popular OOP languages like C++, Java, and Python.
- Differences between procedural and OOP like top-down design and modularity.
26012 Managing & Auditing Security During Implementation And Beyond 03172009 (denigoin)
This document provides an overview and agenda for managing and auditing security during a PeopleSoft implementation and beyond. It discusses what security tools are delivered with PeopleSoft, including queries to map permissions to roles and users. It also covers the core security tables, how to set up row level security in HR and Campus Solutions modules, and highlights some new security features in PeopleSoft 9.1.
This document summarizes a presentation on Dataverse permissions and security. It discusses key concepts like environment access, data ownership, security roles for row-level access, business units, teams and users, column-level security profiles, record sharing and access teams, and hierarchical/positional security. The presentation provides examples and explanations of how to configure these different Dataverse security features.
JD Edwards EnterpriseOne security enables administrators to control access for individual users and groups through various security features at the user, role, public, and object levels. Object level security in particular allows flexibility in securing specific applications, actions, rows, columns, tabs, exits and more. The system supports both user-based and system-based security approaches, with one method needing to be selected before implementing a security model.
JD Edwards EnterpriseOne uses object level security to control user access and permissions. It enables administrators to secure individual applications, forms, tables, fields, and other objects. There are different types of security, including application security, action security, row security, and tab security. The system also supports user-based and system-based security approaches, and security is defined and managed through the User Security and Security Workbench applications.
This document discusses principles of programming and software engineering. It describes the software development life cycle, which consists of nine phases: specification, design, risk analysis, verification, coding, testing, refining the solution, production, and maintenance. It also discusses problem solving through algorithms, data storage, object-oriented programming concepts like encapsulation and inheritance, and design techniques like top-down design and object-oriented design. The document emphasizes that modularity, ease of use, and fail-safe programming are important for developing quality software solutions.
OAC stands for Oracle Analytics Cloud Services, another cloud solution offered by Oracle. It provides a wide range of analytic tools for your data. The question is: do you need to be 100% cloud to use OAC services?
Well, with ODI we always have options, and for OAC that is not an exception.
In this presentation we’ll take a look at three different ways to use ODI to integrate all your data with OAC, ranging from using your existing on-premises environment to a 100% cloud solution (no ODI/DB footprint in your environment).
Oracle Cloud services products, including Planning and Budgeting Cloud Service (PBCS), enable companies to focus on their own business instead of spending money and resources on maintaining big IT infrastructures. They also give companies the ability to stay connected 24x7 from any place in the world.
But what happens if a company already has an on-premises ODI infrastructure and wants to integrate the new PBCS with it? Can we use our existing on-premises ODI? How hard is it to accomplish this?
This session will show how to use your on-premises ODI to integrate and orchestrate your PBCS seamlessly.
Essbase Statistics DW: How to Automatically Administrate Essbase Using ODI (Rodrigo Radtke de Souza)
To have a well-performing Essbase cube, we must stay vigilant and follow its growth and its data movements so we can distribute caches and adjust the database parameters accordingly. But this is a very difficult task to achieve, since Essbase statistics are not temporal and only tell you the cube's state at that specific point in time.
This session will present how ODI can be used to create a historical statistics DW containing the Essbase cube's information and how to identify trends and patterns, giving us the ability to tune our Essbase databases programmatically and automatically.
EPM environments are generally supported by a data warehouse; however, we often see that those DWs are not optimized for the EPM tools. Over the years, we have witnessed that modeling a DW with the EPM tools in mind may greatly increase overall architecture performance.
The most common situation found in several projects is that the people who develop the data warehouse do not have great knowledge of EPM tools, and vice versa. This can create a big gap between those two worlds, which may severely impact performance.
This session will show a number of techniques for modeling the right data warehouse for EPM tools. We will discuss how to improve performance using partitioned tables, create hierarchical queries with "Connect by Prior", use multi-period tables correctly for block data loads with Pivot/Unpivot, and more. And if you want to go even further, we will show you how to leverage all those techniques using ODI, creating the perfect mix to perform any process between your DW and EPM environments.
In a fast-moving business environment, finance leaders are successfully leveraging technology advancements to transform their finance organizations and generate value for the business.
Oracle’s Enterprise Performance Management (EPM) applications are an integrated, modular suite that supports a broad range of strategic and financial performance management tools that help business to unlock their potential.
Dell’s global financial environment contains over 10,000 users around the world and relies on a range of EPM tools such as Hyperion Planning, Essbase, Smart View, DRM, and ODI to meet its needs.
This session shows the complexity of this environment, describing all the relationships between those tools, the techniques used to keep such a large environment in sync, and how the most varied needs of the different businesses and laws around the world are met to create a complete and powerful business decision engine that takes Dell to the next level.
No more unknown members! Smart data load validation for Hyperion Planning usi... (Rodrigo Radtke de Souza)
Usually, ODI data load interfaces for Essbase are simple and fast to develop. But, depending on the data source quality, those interfaces may become a performance challenge. Essbase demands that every POV member for which we are trying to insert data exists in the Essbase outline; when this is not true, Essbase switches its load method from Block Mode to Cell Mode. When this happens, a data load that would take only five minutes to complete may take several hours, degrading the Hyperion environment's performance. Join us in this session to discover how we solved this problem at Dell in a dynamic way for any number of Hyperion Planning applications, using only ODI data constraints and the Hyperion Planning metadata repository to validate all POV members used in the data load, guaranteeing the best performance and data quality in the Hyperion Planning environment.
Are you a young professional who just got out of college and unsure which career path to follow? Are you thinking about changing your career to something completely new and looking for options? Either way, this webinar is the right one for you. It’s the first in a series that the new ODTUG Career Track Community will bring you to show what Oracle careers look like and where/how to start with them.
During this webinar, we will talk about what an ETL developer career looks like, what the expectations are, challenges, rewards, and which steps are needed to be successful. We will discuss a wide range of topics, such as tools used on the job, certification paths, the importance of social media, user groups, and more. This webinar will be presented by Rodrigo Radtke de Souza, who has been working in the Oracle ETL world for quite some time now and has achieved great accomplishments as an ETL developer, such as Oracle ACE nomination, frequent Kscope speaker, ODTUG Leadership Program participant, and a successful career at Dell.
Learn SQL from basic queries to advanced queries (manishkhaire30)
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
#DataAnalysis #SQL #LearningSQL #DataInsights #DataScience #Analytics
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag... (sameer shah)
"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."
State of Artificial Intelligence Report 2023 (kuntobimo2016)
Artificial intelligence (AI) is a multidisciplinary field of science and engineering whose goal is to create intelligent machines.
We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.
The State of AI Report is now in its sixth year. Consider this report as a compilation of the most interesting things we’ve seen with a goal of triggering an informed conversation about the state of AI and its implication for the future.
We consider the following key dimensions in our report:
Research: Technology breakthroughs and their capabilities.
Industry: Areas of commercial application for AI and its business impact.
Politics: Regulation of AI, its economic implications and the evolving geopolitics of AI.
Safety: Identifying and mitigating catastrophic risks that highly-capable future AI systems could pose to us.
Predictions: What we believe will happen in the next 12 months and a 2022 performance review to keep us honest.
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W... (Social Samosa)
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
End-to-end pipeline agility - Berlin Buzzwords 2024 (Lars Albertsson)
We describe how we achieve high change agility in data engineering by eliminating the fear of breaking downstream data pipelines through end-to-end pipeline testing, and by using schema metaprogramming to safely eliminate boilerplate involved in changes that affect whole pipelines.
A quick poll on agility in changing pipelines from end to end indicated a huge span in capabilities. For the question "How long time does it take for all downstream pipelines to be adapted to an upstream change," the median response was 6 months, but some respondents could do it in less than a day. When quantitative data engineering differences between the best and worst are measured, the span is often 100x-1000x, sometimes even more.
A long time ago, we suffered at Spotify from fear of changing pipelines due to not knowing what the impact might be downstream. We made plans for a technical solution to test pipelines end-to-end to mitigate that fear, but the effort failed for cultural reasons. We eventually solved this challenge, but in a different context. In this presentation we will describe how we test full pipelines effectively by manipulating workflow orchestration, which enables us to make changes in pipelines without fear of breaking downstream.
Making schema changes that affect many jobs also involves a lot of toil and boilerplate. Using schema-on-read mitigates some of it, but has drawbacks since it makes it more difficult to detect errors early. We will describe how we have rejected this tradeoff by applying schema metaprogramming, eliminating boilerplate but keeping the protection of static typing, thereby further improving agility to quickly modify data pipelines without fear.
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data Lake (Walaa Eldin Moustafa)
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
Global Situational Awareness of A.I. and where it's headed (vikram sood)
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be un-leashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
2. About the Speakers
Giampaoli, Ricardo
● Master in Business Administration and IT Management
● Founder of TeraCorp Consulting
● 18 years working with IT, the last 8 as an EPM solution architect
● EPM training instructor
● Essbase/OBIEE/ODI Certified Specialist
● Blogger @ devepm.com
Radtke, Rodrigo
● Graduated in Computer Engineering
● Software Developer Advisor at Dell
● Ten years working with IT, the last five as an ETL architect
● ODI, Oracle and Java Certified
● Blogger @ devepm.com
3. About TeraCorp
TeraCorp is a company specialized in products and services focused on EPM.
TeraCorp's mission is to create innovative solutions that help people, businesses and partners to exceed their goals and reach their full potential.
Learn more @ www.teracorp.com.br/en
4. Pre-Requisites
Knowledge of:
● ODI
● Hyperion Planning
● SQL
6. Business Needs: The Study Case
● One cube with an Entity dimension containing all 22,000+ cost centers in the world
● Security must be granted in such a way that a user from a region can only see data from that region's cost centers
● Parent aggregations should display only the sum of the data the user has access to
● Cost centers from different regions sit under the same parent
● Each cost center's region is defined by an attribute dimension
7. Hyperion Planning Security
Is security robust and flexible?
● Attribute dimensions cannot be used to define security
Access control at leaf level?
● How do you provide and maintain security at leaf level in dimensions with 22,000+ cost centers?
● How do you handle cost centers that change their region?
Use Microsoft Excel to generate all the necessary security combinations?
● What is the cost of maintaining such a file in a fast-changing business structure?
8. Aggregation Solution
A Region dimension splits the data by world region and provides the right aggregation at parent levels.
Cost center region is defined by an attribute dimension.
● EMEA users need access only to cost centers whose support geography belongs to SUPP_EMEA, and only to the EMEA Region.
9. Security Solution
Read the Planning application repository to dynamically build the Entity dimension security, based on the geography attributes and the groups associated with the Entity upper-level members.
Security must be granted "bottom-up".
10. Pre-Requisites
● Security must be granted for all users or groups on the high-level members (e.g. Entity gen1 and/or gen2 members); the relation must be set as "Member".
● The Entity members' attributes and the Support Geography hierarchy must be in place.
● The user or group names should have a relationship with the attribute member names.
11. Planning Repository Overview
All the information exists in the Planning repository. Seven tables were used to build this solution:
● Three security tables
● Three attribute tables
● One object table
12. Security Tables
Security is defined using three tables:
● HSP_USERS: only used if a user is assigned directly to an object in Planning
● HSP_GROUP: only used if a group is assigned directly to an object in Planning
● HSP_ACCESS_CONTROL: associates a user or group with an object, and also records what access it has and whether that access spreads to the object's children or applies to the object only
13. Security Tables
HSP_USERS
USER_ID: The user id that is created after a user logs in or is assigned to any object in Hyperion Planning.
SID: The native or external directory ID.
HSP_GROUP
GROUP_ID: The group id that is created after a user that belongs to a group logs in, or after a group is assigned to any object in Hyperion Planning.
SID: The native or external directory ID.
14. Security Tables
HSP_ACCESS_CONTROL
USER_ID: The user or group id that is created after a group or user is assigned to any object in Hyperion Planning.
OBJECT_ID: The ID of the object that has been granted the security.
ACCESS_MODE: The type of access a user or group has on the object: 1 = Read, 3 = ReadWrite, -1 = Deny.
FLAGS: Essbase access flag; determines whether a user or group has access only to that object or to the hierarchy below it: 0 = MEMBER, 5 = @CHILDREN, 6 = @ICHILDREN, 8 = @DESCENDANTS, 9 = @IDESCENDANTS.
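For example, a query along these lines lists the current assignments with the codes decoded. This is a sketch only; the PLANAPP schema name is an assumption, and the column meanings follow the tables above.
SELECT O.OBJECT_NAME AS SECURED_OBJECT,
       CASE AC.ACCESS_MODE
            WHEN 1  THEN 'Read'
            WHEN 3  THEN 'ReadWrite'
            WHEN -1 THEN 'Deny'
       END AS ACCESS_TYPE,
       CASE AC.FLAGS
            WHEN 0 THEN 'MEMBER'
            WHEN 5 THEN '@CHILDREN'
            WHEN 6 THEN '@ICHILDREN'
            WHEN 8 THEN '@DESCENDANTS'
            WHEN 9 THEN '@IDESCENDANTS'
       END AS ESSBASE_FLAG
  FROM PLANAPP.HSP_ACCESS_CONTROL AC
  JOIN PLANAPP.HSP_OBJECT O
    ON O.OBJECT_ID = AC.OBJECT_ID;  -- resolve the secured object's name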
15. Attribute Tables
Attributes are defined using three tables:
● HSP_ATTRIBUTE_DIM: stores all attribute dimensions
● HSP_ATTRIBUTE_MEMBER: holds all attribute members stored in Planning
● HSP_MEMBER_TO_ATTRIBUTE: joins the attributes with the members of a dimension
16. Attribute Tables
HSP_ATTRIBUTE_DIM
ATTR_ID: ID of the attribute dimension.
DIM_ID: ID of the dimension the attribute is associated with.
HSP_ATTRIBUTE_MEMBER
ATTR_MEM_ID: ID of the attribute member.
ATTR_ID: ID of the attribute dimension.
17. Attribute Tables
HSP_MEMBER_TO_ATTRIBUTE
MEMBER_ID: ID of the member that has been assigned an attribute.
ATTR_ID: ID of the attribute dimension.
ATTR_MEM_ID: ID of the attribute member.
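A minimal sketch of how these tables tie an Entity member to its geography attribute; the PLANAPP schema name is an assumption.
SELECT E.OBJECT_NAME AS ENTITY,
       A.OBJECT_NAME AS SUPPORT_GEOGRAPHY
  FROM PLANAPP.HSP_MEMBER_TO_ATTRIBUTE MA
  JOIN PLANAPP.HSP_OBJECT E
    ON E.OBJECT_ID = MA.MEMBER_ID      -- the dimension member
  JOIN PLANAPP.HSP_OBJECT A
    ON A.OBJECT_ID = MA.ATTR_MEM_ID;   -- its attribute member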
18. Object Table
Planning objects are defined using one table:
● HSP_OBJECT: contains the metadata for all Planning objects, as well as the parent/child relationship used to build the whole metadata structure.
19. Object Table
HSP_OBJECT
OBJECT_ID: Object ID for all objects in Planning.
OBJECT_NAME: Stores all metadata names in Planning (e.g. aliases, members).
OBJECT_TYPE: Type of the object (e.g. Entity, Account, Attribute...).
PARENT_ID: Parent ID of the object; used to build the parent/child relationship with OBJECT_ID.
GENERATION: Indicates which generation the object belongs to.
HAS_CHILDREN: Indicates whether the object has children.
20. Entity Hierarchy
Extract the Entity dimension members and their attributes from the Planning repository.
● Use CONNECT BY NOCYCLE PRIOR to rebuild the hierarchy from the bottom up, as sketched below.
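A minimal sketch of the bottom-up walk. The PLANAPP schema name is an assumption, and a real query would also filter OBJECT_TYPE to the Entity dimension.
SELECT CONNECT_BY_ROOT O.OBJECT_NAME AS LEAF_ENTITY,  -- the cost center we started from
       O.OBJECT_NAME                 AS ANCESTOR,
       O.GENERATION,
       LEVEL                         AS STEPS_UP
  FROM PLANAPP.HSP_OBJECT O
 START WITH O.HAS_CHILDREN = 0                        -- leaf-level members
CONNECT BY NOCYCLE PRIOR O.PARENT_ID = O.OBJECT_ID;   -- walk from child to parent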
21. Support Geography Hierarchy
Extract the Support Geography attribute dimension hierarchy from the Planning repository.
● Use CONNECT BY PRIOR to rebuild the hierarchy top-down, as sketched below.
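A sketch of the top-down attribute walk; PLANAPP is an assumed schema name and 'SUPP_GEO' stands in for the real attribute dimension root member.
SELECT LPAD(' ', 2 * (LEVEL - 1)) || O.OBJECT_NAME AS GEOGRAPHY  -- indent by depth
  FROM PLANAPP.HSP_OBJECT O
 START WITH O.OBJECT_NAME = 'SUPP_GEO'       -- hypothetical root member name
CONNECT BY PRIOR O.OBJECT_ID = O.PARENT_ID;  -- walk from parent to child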
23. Users/Groups Security
Extract the generation 1 and 2 members and their security groups from the Planning repository (see the sketch below).
● Generation 1 is Channel and contains all groups that have access to everything.
● Generation 2 members are the business segments and contain all groups that have access only to that segment.
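A minimal sketch of that extraction, using the security tables described earlier; the PLANAPP schema name is an assumption.
SELECT O.OBJECT_NAME  AS MEMBER_NAME,
       G.OBJECT_NAME  AS GROUP_NAME,
       AC.ACCESS_MODE,
       AC.FLAGS
  FROM PLANAPP.HSP_ACCESS_CONTROL AC
  JOIN PLANAPP.HSP_OBJECT O ON O.OBJECT_ID = AC.OBJECT_ID   -- the secured member
  JOIN PLANAPP.HSP_OBJECT G ON G.OBJECT_ID = AC.USER_ID     -- the group (or user)
 WHERE O.GENERATION IN (1, 2);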
24. Join 2: Adding Security Groups
Join the queries with a LIKE on REGION_NAME, as sketched below.
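A sketch of the LIKE join; gen12_security and entity_attributes are hypothetical names standing in for the two result sets above, and the naming convention matched by LIKE is illustrative.
SELECT s.GROUP_NAME,
       a.ENTITY_NAME
  FROM gen12_security   s
  JOIN entity_attributes a
    ON s.GROUP_NAME LIKE '%' || a.REGION_NAME || '%';  -- group name embeds the region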
26. Join 3: Putting Everything Together
Join the PARENT_ID from generation 1 or 2 with the ENTITY_ID, as sketched below.
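A sketch of the final assembly; the three dataset names are hypothetical and stand in for the results of the previous steps.
SELECT s.GROUP_NAME,
       e.LEAF_ENTITY,
       s.ACCESS_MODE
  FROM entity_bottom_up e                  -- bottom-up hierarchy result
  JOIN gen12_security   s
    ON s.MEMBER_ID = e.ANCESTOR_ID         -- the gen 1/2 ancestor carries the group
  JOIN entity_attributes a
    ON a.ENTITY_ID = e.LEAF_ENTITY_ID
   AND s.GROUP_NAME LIKE '%' || a.REGION_NAME || '%';  -- region filter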
27. Why ODI?
Fully flexible development platform
● Tweak KMs and procedures to create dynamic processes
● Accepts virtually any existing technology
Complete execution platform
● Built-in security (only key users can use it)
● Easy for users to operate
● Automate, schedule, and control jobs
● Complete log information
28. Solution Design Choices
Two ways to do it:
● Solution 1: Generate a SecFile and run a command line at the end of the ODI process to load it into Planning (using the ImportSecurity utility)
● Solution 2: Insert the security directly into the HSP_ACCESS_CONTROL table (see the sketch after this list)
Comparing the approaches:
● Clearing security: ImportSecurity offers no clear control (clear all or nothing); the repository insert can clear any type of security based on any rule (delete clause + repository).
● Service restart: ImportSecurity requires no service restart; the repository insert does.
● Repository manipulation: ImportSecurity performs none; the repository insert manipulates the repository directly.
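For Solution 2, a minimal sketch of the direct write. The PLANAPP schema name and the IDs are illustrative (they would come from the joins above), and the columns shown are only the ones described earlier; a real insert may need additional columns.
INSERT INTO PLANAPP.HSP_ACCESS_CONTROL (USER_ID, OBJECT_ID, ACCESS_MODE, FLAGS)
VALUES (1001,   -- group id from HSP_GROUP
        2002,   -- leaf entity id from HSP_OBJECT
        1,      -- 1 = Read
        0);     -- 0 = MEMBER (no spread to children)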
29. ImportSecurity utility loads access permissions for
users or groups from a text file into Planning
ImportSecurity
Parameter Description
[-f:passwordFile] Optional: If an encrypted password file is set up, use as the first parameter in the
command line to read the password from the full file path and name specified in
passwordFile.
appname Name of the Planning application to which you are importing access permissions.
username Planning administrator user name.
delimiter Optional: SL_TAB, SL_COMMA, SL_PIPE, SL_SPACE, SL_COLON, SL_SEMI-COLON. If
no delimiter is specified, comma is the default.
RUN_SILENT Optional: Execute the utility silently (the default) or with progress messages. Specify 0 for
messages, or 1 for no messages.
[SL_CLEARALL] Optional: Clear existing access permissions when importing new access permissions. Must
be in uppercase.
ImportSecurity.cmd [-f:passwordFile] "appname,username,[delimiter],[RUN_SILENT],[SL_CLEARALL]"
Solution 1
30. Item Description
username or group name The name of a user or group defined in Shared Services Console.
artifact name The named artifact for the imported access permissions (for example the member,
data form, task list, folder, or Calculation Manager business rule).
access permissions Read, ReadWrite, or None. If there are duplicate lines for a user/member
combination, the line with ReadWrite access takes precedence.
Essbase access flags @CHILDREN, @ICHILDREN, @DESCENDANTS, @IDESCENDANTS and
MEMBER.
artifact type For artifacts other than members, distinguish which artifact you are importing
security for with an artifact type identifier.
The SecFile.txt contains the access permissions
for users or groups and should have the
following format:
SecFile.txt
Solution 1
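A short hypothetical sample following the layout above (group, member, access permission, Essbase access flag; the group and member names are made up):

    G_AMER_Channel,CC_10001,ReadWrite,MEMBER
    G_EMEA_Channel,CC_20002,Read,MEMBER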
31. Importing access permissions overwrites
existing access assignments, and the
SL_CLEARALL parameter clears all existing
access permissions, giving us two options:
● (1.1) Load only the new security and manually delete
the old undesired access (sent by email through the
interface)
● (1.2) Clear all security with SL_CLEARALL and then
load all access from all dimensions back to Planning
(Entity + all other existing security)
Design Decision
Solution 1
32. Solution 1.1
Load only new security to SecFile.txt
● Using two datasets to generate a MINUS between the
new and the existing security (see the sketch below)
Generating SecFile.txt
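The dataset logic, sketched in SQL. TMP_ENTITY_SECURITY is a hypothetical name for the temporary table holding the security generated by the repository query described earlier:

    -- New security = desired security MINUS what Planning already has.
    SELECT t.GROUP_NAME, t.MEMBER_NAME, t.ACCESS_MODE, t.FLAGS
    FROM   TMP_ENTITY_SECURITY t
    MINUS
    SELECT grp.OBJECT_NAME, mem.OBJECT_NAME, ac.ACCESS_MODE, ac.FLAGS
    FROM   HSP_ACCESS_CONTROL ac
    JOIN   HSP_OBJECT mem ON mem.OBJECT_ID = ac.OBJECT_ID
    JOIN   HSP_OBJECT grp ON grp.OBJECT_ID = ac.USER_ID
    WHERE  mem.OBJECT_TYPE = 33   -- Entity only
    AND    ac.FLAGS = 0;          -- MEMBER-level grants only

Solution 1.3's inverse (the old-security file) swaps the two sides of the MINUS, as the next slide describes.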
33. Solution 1.1
Load all old security to OldSecurity.txt
● Using two datasets to generate a MINUS between the
existing security and the newly generated access
Generating Old Security File
34. Solution 1.2
Load ALL security to SecFile.txt
● Using two datasets to generate a UNION between
the new and the existing security
Generating Full SecFile.txt
35. Use an ODI procedure to run a CMD command
on the Planning server and import the security
Import Security
Solution 1
41. Ricardo Giampaoli – TeraCorp
Rodrigo Radtke de Souza - Dell
Thank you!
Thank You
Editor's Notes
Ricardo:
Present himself
Rodrigo:
Present himself
Ricardo:
TeraCorp is a company specialized in products and services focused on EPM.
We are working to create EPM products that help with the development process, maintenance effort, and performance improvement.
We want to create a smarter EPM environment.
TeraCorp's mission is to create innovative solutions that help people, businesses, and partners exceed their goals and reach their full potential.
For a better understanding of this session it is good to have:
1) Advanced Knowledge of ODI
2) Good Knowledge of Hyperion Planning
3) Good Knowledge of SQL
Our agenda will cover:
Business Needs
Hyperion Planning Security
Planning Repository
Building Solutions
Dell’s Environment
QA
Before this project we had five big Planning applications at Dell. Three of them were split by region (AMER, APJ, and EMEA), one was WWOPS, and one was a RUM application (Revenue Under Management). Each of these applications had its own set of cost centers. Even the RUM app split its cost centers across cubes (one cube per region).
In this project the business requirement was to create a new RUM app containing all cost centers from all regions (22,000+) in the same cube.
Security had to be granted in a way that each user can only see the cost centers of their region.
The aggregation at the parent level also had to respect the user's access; that means, if a parent has three children, each from a different region, and the user has access only to AMER, they must see only the AMER amount at the parent level.
The tricky part is that we could have cost centers from different regions under the same parent, and the only way to identify a cost center's region is through the Support Geography attribute dimension.
There is no doubt that Hyperion Planning security is robust and extremely flexible.
Well, this statement is true, but not 100% accurate, because Planning doesn't allow us to use attribute dimensions as a filter in the security. But it is true because we can create the security ourselves using external tools such as the ImportSecurity utility or even repository manipulation. The problem is how to create a process that handles a fast-changing environment and is easy to maintain.
I have already seen companies in other projects generate the security file using Excel, but with more than 22,000 cost centers and more than 70 security groups it gets a little hard to maintain such a file.
For the aggregation part, a Region dimension solves the problem (when users input data on an AMER cost center they also need to select the AMER region).
The security is pretty simple in this dimension: all groups that have access to AMER are granted access to the AMER region, the same for the other regions, and users with global access are granted access to Total Region and its descendants. We also used Planning conditional formatting to black out the form if an invalid combination of cost center and region is chosen.
We can see that generation 1 of Support Geography has the region name on it, making it easy to identify and create any rule based on it.
The problem is how to create the security on the Entity dimension using an attribute dimension as a filter, knowing that under one parent we could have cost centers from different regions.
The solution is right under Planning: in its repository.
By querying the Planning application repository we can build the entire security from the bottom up and create the SecFile, or even populate the security tables directly, using any type of rule imaginable.
And the best part is: it's 100% dynamic, meaning almost zero maintenance work.
How will it work? Basically we need to know three things:
The Entity members and their Support Geography attribute;
The Support Geography hierarchy, to find out which region each attribute belongs to;
And the security groups that we need to grant access to.
For the security groups we need two things:
First, each group name must contain the region it belongs to (this is used in the query to join the groups with the cost center attributes).
Second, we need to grant security at generations 1 and 2 of the Entity hierarchy.
Gen 1 will be the all-channel access groups, restricted by region.
Gen 2 will be the groups that have access only to a business segment plus a specific region.
Also, the access flag should be set as Member (because we'll grant access member by member and we don't want any IDescendants on any parent).
And the access mode granted to these groups will be used when we spread the security through the hierarchy.
This gives our query the main lines to build the security (which channel or business segment the user will have access to, and which region).
OK, now we need to know how to get it:
We have all the information regarding a Planning application in its repository.
To build the security we'll need to query seven tables:
3 tables to get the groups/users and the actual security
3 tables for the attribute hierarchy
1 table to get the metadata information
Planning stores all its security in three tables:
HSP_USERS, which stores all users that have access to Planning;
HSP_GROUP, which stores all groups that have access to Planning;
And HSP_ACCESS_CONTROL, where Planning stores all information regarding the objects and their grants.
The HSP_USERS and HSP_GROUP tables have the same structure. Both have an ID column (named differently per table), and in both cases rows are created after a user logs in or is granted access to any object in Planning.
Both also have the SID, which is the native or external directory ID.
HSP_ACCESS_CONTROL is the table that stores all security for all objects in Planning.
It contains the USER_ID (this column stores the IDs from both HSP_USERS and HSP_GROUP);
The OBJECT_ID, which is the ID of the object that received the grant;
The ACCESS_MODE, i.e. whether the user has read, readwrite, or deny access to an object;
And FLAGS, which tells Planning whether that user has access only to that object or also to its children.
Planning stores the attributes in three tables:
HSP_ATTRIBUTE_DIM stores all attribute dimensions (the dimensions themselves: if your Planning application has three attribute dimensions, this table has three rows);
HSP_ATTRIBUTE_MEMBER holds all attribute members stored in Planning;
HSP_MEMBER_TO_ATTRIBUTE joins the attributes with the members of a dimension.
In HSP_ATTRIBUTE_DIM we have ATTR_ID, the ID of the attribute dimension,
and DIM_ID, the ID of the base dimension the attribute is associated with.
In HSP_ATTRIBUTE_MEMBER we have ATTR_MEM_ID, the ID of the attribute member, and ATTR_ID, the ID of the attribute dimension (here we can figure out which attribute member belongs to which attribute dimension).
And in HSP_MEMBER_TO_ATTRIBUTE we have MEMBER_ID, the ID of the member that has been assigned an attribute,
ATTR_ID, the ID of the attribute dimension,
and ATTR_MEM_ID, the ID of the attribute member.
This table tells us which member has which attribute from which attribute dimension.
And finally we have the HSP_OBJECT table, which contains the metadata for all Planning objects as well as the parent/child relationship used to build the entire hierarchy structure.
This table has some other columns, but these are the important ones for us:
OBJECT_ID stores the ID for every object in Planning;
OBJECT_NAME stores the name of every metadata object in Planning (e.g. aliases, members, folders, forms);
OBJECT_TYPE stores the type of the object (e.g. Entity, Account, Attribute, folder, forms…);
PARENT_ID stores the parent ID of the object, used together with OBJECT_ID to build the parent/child relationship;
GENERATION indicates which generation the object belongs to;
HAS_CHILDREN indicates whether the object has children.
OK, now let's start to build our query.
First we need to get all members from the Entity dimension with their attribute.
This will be used as the foundation of our query. It's important to spread the leaf attribute to its parents; this way we can join this sub-query, even at the parent level, with the Support Geography sub-query, getting the region of the attribute associated with the entity leaf member for the entire hierarchy.
To do so we need to rebuild the parent/child relationship in the HSP_OBJECT table, and since we need the leaf attribute spread to all of its parents, we need to rebuild the hierarchy bottom-up. We do that because we'll use the CONNECT_BY_ROOT operator, which brings the ROOT information of the hierarchy (its starting point); by inverting the hierarchy, the root becomes the leaf member.
So we'll query the HSP_OBJECT table using CONNECT BY NOCYCLE with OBJECT_ID = PRIOR PARENT_ID to rebuild the Entity hierarchy bottom-up. We need the NOCYCLE operator because we inverted the CONNECT BY PRIOR relationship, and without it the query would loop forever.
Also, because we inverted the CONNECT BY, the starting point of the query will be all leaf members of the Entity dimension instead of the top-level member. For that we create a sub-query in the START WITH clause (the clause that defines the starting point of CONNECT BY), filtering OBJECT_TYPE = 33, the Entity type ID, and HAS_CHILDREN = 0 to get only the leaf members.
We can see in this example that entity 223281 has the Support Geography attribute "SUPP_Netherlands", and that this attribute, as well as the LEAF_NAME, was spread from the leaf up through all its parents to the top of the hierarchy.
OK, now we need to get the region information from the Support Geography attribute dimension.
Since the region information is the top level of Support Geography, we need to rebuild the entire hierarchy to figure out the region of every attribute.
This sub-query is different from the Entity sub-query (which was bottom-up) because here we need the parent information of the hierarchy.
We're going to use the same strategy as before, a CONNECT BY PRIOR on the HSP_OBJECT table, but using PARENT_ID = PRIOR OBJECT_ID (the opposite order of the previous sub-query) in order to carry the region name from the parent down to its children (a top-down query), using the same CONNECT_BY_ROOT operator but this time getting the top level instead of the leaf level (remember, it always gets the ROOT information, i.e. the starting point).
As we can see in the example, we now know that for "SUPP_Netherlands" the REGION_NAME is EMEA, as it is for all of its parents.
Now we need to join both queries by ATTR_MEM_ID, the ID of the leaf attribute member.
With this we find the region name for the entire Entity hierarchy.
Now we need to figure out which group will be spread to which members. To do so we query HSP_ACCESS_CONTROL to get all security on generations 1 and 2 of Entity (remember that this is our guide for the security spread) and also query HSP_OBJECT to get the names of the groups (we need the names in case we use the ImportSecurity approach).
It's important to remember to filter only FLAGS = 0; this way we get only the security that is set as member only (not IDescendants or any other flag).
With this query we get all the security for generations 1 and 2 plus all the information regarding it, such as the access mode (read, write, or deny) and the flags (in our case always 0).
Now we need to join the previous two sub-queries with this one to get the security information for each member we have in the Entity dimension.
Our join is a LIKE between the group name and the region name.
This is why we need to have the region name in both the group name and the Support Geography attribute dimension (or any attribute dimension we want to use to create security).
We can see that a join like this creates a Cartesian result across all regions and group names. This will be fixed when we join this sub-query with the next and last sub-query, where we identify which parent each group belongs to.
OK, now we need to get rid of the Cartesian product that our last sub-query creates.
For this we'll need a sub-query that brings all members under generations 1 and 2, in a way that creates a relationship between the generation 1 and 2 members and their children.
We'll use our old friend CONNECT BY PRIOR in the normal way (top-down), from Channel to the leaves, and we'll use CONNECT_BY_ROOT to get the parent name, plus one new thing:
SYS_CONNECT_BY_PATH. This operator builds a string with the entire path returned by the query, separating each member with any character we select; in our case we chose |.
This creates our relationship, because we'll have the entire hierarchy in the same row.
We only need to split it into columns, and for this we'll use an Oracle regular expression: REGEXP_SUBSTR(PATH, '[^|]+', 1, 1).
This function lets us take a string, define a pattern (our search value), and define the starting position and the occurrence.
This is convenient because to move to the next member all we need to do is increase the occurrence parameter (the last argument) to 2, meaning it gets the next occurrence of the search value.
And finally we join this last sub-query with the others, getting rid of the Cartesian product by creating a relationship between the parent member that holds the security group and the member that will receive the security, finishing our security query.
For this we join the generation 1 and 2 members (which we called PARENT_G1 and PARENT_G2) with the PARENT_ID from the Parents sub-query, plus the ENTITY_ID from the Members sub-query with the ENTITY_ID from the Parents sub-query.
With this done we have our final query, with all the information needed to dynamically create any Planning security based on any attribute dimension. (Remember that we built it on generations 1 and 2 because that's enough for us. If you need to assign security at a finer grain, just increase the generations in the query to 3 or more as needed; the rest stays the same.)
OK, from now on we'll talk about the solution choices we used at Dell and explain the reasons for them. But with this query you could create your own solution, such as a trigger that uses it to monitor any new groups added to the HSP_ACCESS_CONTROL table and populate it automatically, or PL/SQL to do the same, or any other solution we could build with this information.
-----15 min
ODI is by far the best integration tool on the market today.
ODI 11g can be used to integrate several EPM tools, especially Hyperion Planning applications.
ODI knowledge modules can be used to maintain Hyperion Planning metadata and also to load and extract data from its Essbase cubes.
But the main reason to use ODI is to take advantage of its flexibility to customize its code.
You may tweak knowledge modules and procedures in such a way that you can create dynamic processes with them.
With the right architecture in place, ODI can be used as a fully flexible development and execution platform.
Dell uses ODI for the complete process that automates the entire maintenance cycle, admin tasks (like security, backups, optimizations), and inbound and extract jobs, with the possibility to schedule those jobs.
For now we need to stick with ODI 11g because ODI 12c doesn't support Hyperion Planning/Essbase yet.
Rodrigo: Now that we have defined how our security should look, we need to decide how to apply it to Hyperion Planning.
We have two ways to do it (read slide).
Let’s talk about the first solution.
This solution uses the existing ImportSecurity utility that ships with Hyperion Planning to read the SecFile and load the security into the application.
This slide shows how we use the utility. It is a very simple command line with some basic parameters (read slide).
Rodrigo: SecFile is just a delimited text file with all the security that needs to be added to Hyperion Planning.
It needs to have the following format (read the slide).
As I said in the previous slides, one of the cons of loading the security through the ImportSecurity utility is that it does not give good control over clearing: basically we either clear everything and load the new security from the SecFile, or we clear nothing and just add the new security settings over the existing ones, which may leave some garbage, some undesired old security, for example when a cost center changes its parent.
Based on this restriction, we may work in two ways: we may load the SecFile without SL_CLEARALL, discover the old undesired access using SQL, and then send it by email to an admin to have it removed manually.
Or we may use a SQL query to retrieve the existing security that is not related to Entity (the main dimension we are showing here), clear all security using the SL_CLEARALL option, and load all new Entity-related security plus all the other existing security retrieved by this query back into Planning.
But why not always use the SL_CLEARALL option? Because depending on how big and complex your security settings are, it may take a long time to refresh the Essbase security filters. So this decision needs to take your current architecture into consideration to see whether it is worth using.
So, if we choose not to use SL_CLEARALL and want to manually delete the old undesired security settings, we will need to create two ODI interfaces.
The first one will load only the new security to SecFile.txt. This is easily done by creating two datasets inside the ODI interface, as shown in this slide:
The first dataset contains the entire desired Entity security (from our temporary security table) minus the security that already exists in Planning; you may notice the OBJECT_TYPE = 33 filter for Entity, added because we are dealing only with the Entity dimension here.
In our target datastore we added four columns that represent the necessary layout for the SecFile. We omitted the ARTIFACT_TYPE column because the MEMBER type is the default when it is left blank. We also did some basic mappings there, decoding the Planning codes into the codes accepted by the SecFile, so for ACCESS_PERMISSION 1 becomes Read, 3 becomes Write, and so on. The same applies to the ESSBASE_ACCESS_FLAG column (a sketch of this mapping follows).
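A sketch of that mapping, using the codes named in the slides (1 = Read, 3 = Write/ReadWrite; the flag codes come from the access-flag slide; the 'None' fallback is an assumption):

    -- Decode Planning repository codes into the values the SecFile accepts.
    SELECT DECODE(ac.ACCESS_MODE, 1, 'Read',
                                  3, 'ReadWrite',
                                     'None')        AS ACCESS_PERMISSION,
           DECODE(ac.FLAGS, 0, 'MEMBER',
                            5, '@CHILDREN',
                            6, '@ICHILDREN',
                            8, '@DESCENDANTS',
                            9, '@IDESCENDANTS')     AS ESSBASE_ACCESS_FLAG
    FROM   HSP_ACCESS_CONTROL ac;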
Now that only the new security is in the SecFile, it is just a matter of using it to import into Hyperion Planning without the SL_CLEARALL parameter.
Our second interface inverts the dataset order:
The first dataset reads what currently exists in Hyperion Planning, filtering OBJECT_TYPE = 33, which stands for Entity, and FLAGS = 0, which represents the MEMBER security type.
The second dataset gets the data from our temporary security table. When we MINUS the first dataset and the second dataset, the query gives us all the old undesired security that should no longer exist in the application.
Our target datastore can be in any format here; in this example we decided to keep it like the SecFile for convenience. Now you may create an ODI procedure that sends an email with this file attached to an admin, who deletes that security manually.
But if we decide to go with the SL_CLEARALL option, we must remember that we clear everything first, so we need to reload both the new Entity security and everything that already existed in the Planning application that was not related to Entity or to the MEMBER security type, because those are the ones we are actually manipulating here. So we need to create just one interface, slightly different from the ones I have shown: this interface will UNION both datasets instead of MINUS them.
The first dataset is the desired security settings from our temporary security table, containing the security for Entity and only for the MEMBER security type.
The second dataset is everything that exists in Planning that is not related to Entity (see the filter OBJECT_TYPE <> 33 in the slide) OR everything with the FLAGS column different from 0, meaning it is not a MEMBER security type. The filter needs to be built this way because we want all security related to other dimensions, plus any Entity security we may have that is not MEMBER-level, like IDescendants, IChildren, and so on.
And again, our target table is the same SecFile datastore shown before, with the same mappings to convert Planning codes into SecFile codes. (A sketch of this UNION follows.)
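The UNION, sketched with the same hypothetical TMP_ENTITY_SECURITY table used earlier:

    -- Full SecFile = desired Entity MEMBER security UNION everything else
    -- that already exists (other dimensions, or non-MEMBER Entity grants).
    SELECT t.GROUP_NAME, t.MEMBER_NAME, t.ACCESS_MODE, t.FLAGS
    FROM   TMP_ENTITY_SECURITY t
    UNION
    SELECT grp.OBJECT_NAME, mem.OBJECT_NAME, ac.ACCESS_MODE, ac.FLAGS
    FROM   HSP_ACCESS_CONTROL ac
    JOIN   HSP_OBJECT mem ON mem.OBJECT_ID = ac.OBJECT_ID
    JOIN   HSP_OBJECT grp ON grp.OBJECT_ID = ac.USER_ID
    WHERE  mem.OBJECT_TYPE <> 33  -- other dimensions/artifacts
       OR  ac.FLAGS <> 0;         -- non-MEMBER grants on Entity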
Here is the final procedure for our Solution 1, regardless of whether you use SL_CLEARALL or not.
This procedure calls the ImportSecurity utility command, passing the necessary parameters to it.
Here we can see the connection information (application name and user name) in the "Command on Source" tab, which is set to our Hyperion Planning application. This Command on Source/Target technique is great in ODI because it lets you use information set in the Source tab inside the command in the Target tab, giving you a lot of coding flexibility. The Source tab can also be used similarly to a PL/SQL cursor: a SQL command on Source may return N rows, and for each of those rows ODI executes whatever is in the Target tab. It is a pretty cool feature and we use it a lot in our ODI projects.
You can also notice that we use two ODI variables: PLANNING_BIN_PATH, which indicates where the ImportSecurity utility is located under the Planning install folder; and PASSWORD_FILE, the full path where the password file is stored. Remember, the password file is necessary in our case because this is an automated process and we will not type the password at the prompt.
We also added an ODI option to select whether to use SL_CLEARALL or not, so we may use this procedure in both approaches shown. This is all we need to load the SecFile into the Hyperion Planning application. Very little coding.
OK, now let's switch to our second solution, the direct manipulation of the HSP_ACCESS_CONTROL table using SQL. Before I move on, and before you ask which approach you should use, I must say that our intention here is not to tell you which approach is better, especially because Ricardo and I have different opinions about it: I prefer the ImportSecurity approach because we use what we officially have, even though it is incomplete and sometimes painful to use. Ricardo will always prefer repository manipulation because it is easier and much more flexible and powerful. Anyway, we will show you both solutions and you decide which one is best for you. We can guarantee that both were tested and work as expected. Also, Oracle prefers the first approach, since they don't like us touching their metadata tables.
OK, so let's see what we have here.
Instead of using ODI interfaces, we decided to use an ODI procedure with two steps inside it, due to its simplicity and flexibility. The logic in the queries is very similar to what was used in Solution 1.
The first step deletes from HSP_ACCESS_CONTROL everything related to Entity dimension members with FLAGS = 0 (MEMBER) that does not exist in our temporary security table. This removes all the old undesired access that should not be in the application.
The second step inserts into HSP_ACCESS_CONTROL everything from our temporary security table that does not exist in HSP_ACCESS_CONTROL yet. (A sketch of both steps follows.)
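A sketch of the two steps, again assuming the hypothetical TMP_ENTITY_SECURITY table (keyed by GROUP_ID/ENTITY_ID); the real HSP_ACCESS_CONTROL insert may require more columns than shown here:

    -- Step 1: remove old Entity MEMBER-level grants no longer desired.
    DELETE FROM HSP_ACCESS_CONTROL ac
    WHERE  ac.FLAGS = 0
    AND    EXISTS (SELECT 1 FROM HSP_OBJECT o
                   WHERE  o.OBJECT_ID = ac.OBJECT_ID
                   AND    o.OBJECT_TYPE = 33)
    AND    NOT EXISTS (SELECT 1 FROM TMP_ENTITY_SECURITY t
                       WHERE  t.ENTITY_ID = ac.OBJECT_ID
                       AND    t.GROUP_ID  = ac.USER_ID);

    -- Step 2: add the desired grants that are not there yet.
    INSERT INTO HSP_ACCESS_CONTROL (USER_ID, OBJECT_ID, ACCESS_MODE, FLAGS)
    SELECT t.GROUP_ID, t.ENTITY_ID, t.ACCESS_MODE, 0
    FROM   TMP_ENTITY_SECURITY t
    WHERE  NOT EXISTS (SELECT 1 FROM HSP_ACCESS_CONTROL ac
                       WHERE  ac.USER_ID   = t.GROUP_ID
                       AND    ac.OBJECT_ID = t.ENTITY_ID);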
Performing these two steps guarantees that your Hyperion Planning application is in sync with the necessary security. But in order to have it applied to Planning, we first need to restart the Planning services.
This slide shows a very simple (dummy) way to restart Planning. This example was done on Windows and just calls a Service Controller command to stop and start the HYS9Planning service. Between the commands we added a "Wait" ODI object with an arbitrary number of seconds to wait for the service to go down and come up again. There are much better ways to do this, all dependent on the operating system your Planning application runs on; since showing all possible ways is not in the scope of this presentation, we decided to show the simplest one.
And here is the final ODI package that you get: a very small but powerful package that allows you to automatically maintain your Planning security settings with all the benefits of ODI scheduling, proper user access, and so on. Depending on the solution you followed you may have different objects in it, but in the end all of them achieve the same objective.
Here is a final (bonus) slide on how powerful and useful ODI can be in a Hyperion Planning structure. Here is what we have at Dell today. (Read the slide.)