This document provides instructions for adding new key figures to an InfoCube in SAP BI 7.0 using the remodeling feature and for populating the new key figures through customer exits. It describes creating an exit class and method, the remodeling process itself, and the post-remodeling steps. The example walks through splitting an existing revenue key figure into separate plan and actual revenue key figures populated by the exit code.
Optimized dso data activation using massive parallel processing in sap net we... (Nuthan Kishore)
SAP NetWeaver BW 7.3 introduces optimized data activation for standard DataStore objects that uses massive parallel processing (MPP) on supported database platforms like IBM DB2. This allows the data activation to be performed directly in the database via parallel SQL statements, rather than processing records one by one in the application server. It can significantly improve performance over the previous method. The document describes how MPP-optimized activation works, its implementation for DB2, and recommendations for its use.
The document discusses state modeling and state diagrams. It defines states as representations of intervals of time that describe an object's behavioral condition. Events trigger transitions between states. A state diagram uses a graph to represent an object's states and the transitions between them caused by events. It specifies the object's response to input events over time. The document provides examples of how to notationally represent states, transitions, events, and other elements in a state diagram.
Developing Complex Business Rules with Drools Integration (Bonitasoft)
Create rich and dynamic rule driven business process applications with the Bonita Open Solution BPM Suite.
Learn how to add business rules to your process transitions easily with decision tables in the Bonita Studio for process modeling, and for more complex rules, use the Drools Connector to call shared rules.
This document presents an introduction to NoSQL models and polyglot persistence. It covers concepts such as Big Data, the CAP theorem, ACID vs. BASE properties, and different NoSQL data models such as key-value, document, and column family. It also discusses topics such as MapReduce, JSON, BSON, and the importance of agility in software development.
Espresso: LinkedIn's Distributed Data Serving Platform (Talk) (Amy W. Tang)
This talk was given by Swaroop Jagadish (Staff Software Engineer @ LinkedIn) at the ACM SIGMOD/PODS Conference (June 2013). For the paper written by the LinkedIn Espresso Team, go here:
http://www.slideshare.net/amywtang/espresso-20952131
A sequence diagram shows the interactions between objects in a system. It displays objects arranged along a timeline, with messages passed between objects over time. Key elements include object lifelines that run vertically, messages shown as horizontal arrows, and activation boxes that indicate when an object is processing a message. Sequence diagrams are useful for illustrating the flow of control in response to events generated by external actors on the system.
The document discusses using Hazelcast distributed locks to synchronize access to critical sections of code across multiple JVMs and application instances. It describes how Hazelcast implements distributed versions of common Java data structures, including distributed locks via its ILock interface. It provides examples of configuring a Hazelcast cluster programmatically by specifying cluster properties like IP addresses and ports, and shows how to obtain and use a distributed lock within a try-finally block to ensure it is released.
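A minimal sketch of that pattern, assuming the Hazelcast 3.x API (the member addresses, port, and lock name here are illustrative, not taken from the document):

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

public class DistributedLockExample {
    public static void main(String[] args) {
        // Programmatic cluster configuration: members join over TCP/IP, multicast disabled
        Config config = new Config();
        config.getNetworkConfig().setPort(5701);
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
        config.getNetworkConfig().getJoin().getTcpIpConfig()
              .setEnabled(true)
              .addMember("10.0.0.1")     // hypothetical member addresses
              .addMember("10.0.0.2");

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        // Every JVM asking for "orders-lock" receives the same cluster-wide lock
        ILock lock = hz.getLock("orders-lock");
        lock.lock();
        try {
            // critical section: only one JVM in the cluster executes this at a time
            processOrders();
        } finally {
            lock.unlock(); // always release, even if the critical section throws
        }
    }

    private static void processOrders() { /* ... */ }
}
```

Because all instances obtain the lock by the same name from the cluster, the try-finally block guarantees the lock is released even when the guarded code fails.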
Tradeoffs in Distributed Systems Design: Is Kafka The Best? (Ben Stopford and... (HostedbyConfluent)
When choosing an event streaming platform, Kafka shouldn’t be the only technology you look at. There are a plethora of others in the messaging space today, including open source and proprietary software as well as a range of cloud services. So how do you know you are choosing the right one? A great way to deepen our understanding of event streaming and Kafka is exploring the trade-offs in distributed system design and learning about the choices made by the Kafka project. We’ll look at how Kafka stacks up against other technologies in the space, including traditional messaging systems like Apache ActiveMQ and RabbitMQ as well as more contemporary ones, such as BookKeeper derivatives like Apache Pulsar or Pravega. This talk focuses on technical details such as differences in messaging models, how data is stored locally as well as across machines in a cluster, when (not) to add tiers to your system, and more. By the end of the talk, you should have a good high-level understanding of how these systems compare and which you should choose for different types of use cases.
Design patterns are general reusable solutions to common problems in software design. They are not specific designs that can be transformed directly into code, but descriptions that can be applied to many situations. In 1994, the "Gang of Four" authors published the influential book Design Patterns, which introduced design patterns to software development. The book categorized patterns into creational, structural, and behavioral groups. Factory pattern is a creational pattern that provides a way to create objects without exposing object creation logic to the client. It allows for more flexibility in deciding which objects need to be created.
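As a brief illustration of that creational idea, here is a hedged Java sketch of a simple factory; the Notification classes and channel names are invented for the example:

```java
// A simple factory: the client asks for a product by name and never
// touches the concrete classes or the creation logic.
interface Notification {
    void send(String message);
}

class EmailNotification implements Notification {
    public void send(String message) { System.out.println("Email: " + message); }
}

class SmsNotification implements Notification {
    public void send(String message) { System.out.println("SMS: " + message); }
}

class NotificationFactory {
    // Creation logic lives in one place; adding a new channel only touches the factory
    static Notification create(String channel) {
        switch (channel) {
            case "email": return new EmailNotification();
            case "sms":   return new SmsNotification();
            default: throw new IllegalArgumentException("Unknown channel: " + channel);
        }
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        Notification n = NotificationFactory.create("email"); // client depends only on the interface
        n.send("Order shipped");
    }
}
```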
A stream processing platform is not an island unto itself; it must be connected to all of your existing data systems, applications, and sources. In this talk we will provide different options for integrating systems and applications with Apache Kafka, with a focus on the Kafka Connect framework and the ecosystem of Kafka connectors. We will discuss the intended use cases for Kafka Connect and share our experience and best practices for building large-scale data pipelines using Apache Kafka.
Extreme Apache Spark: how in 3 months we created a pipeline that can process ... (Josef A. Habdank)
The presentation consists of an amazing bundle of pro tips and tricks for building an insanely scalable Apache Spark and Spark Streaming based data pipeline.
It has 4 parts:
* Quick intro to Spark
* N-billion rows/day system architecture
* Data Warehouse and Messaging
* How to deploy spark so it does not backfire
Near Real Time Indexing: Presented by Umesh Prasad & Thejus V M, Flipkart (Lucidworks)
This document summarizes a presentation given by Umesh Prasad and Thejus V M of Flipkart on building a real-time search index for e-commerce. It discusses the need for real-time indexing to support high update rates and microservices architecture at Flipkart. It evaluates using SolrCloud but finds that update-by-delete-and-add hinders performance. The presentation then describes Flipkart's approach using a near real-time Lucene store with optimized data structures and filtering to enable low-latency search across updated documents.
Stream Processing using Apache Flink in Zalando's World of Microservices - Re... (Zalando Technology)
In this talk we present Zalando's microservices architecture, introduce Saiki – our next generation data integration and distribution platform on AWS and show how we employ stream processing for near-real time business intelligence.
Zalando is one of the largest online fashion retailers in Europe. In order to secure our future growth and remain competitive in this dynamic market, we are transitioning from a monolithic to a microservices architecture and from a hierarchical to an agile organization.
We first have a look at how business intelligence processes have been working inside Zalando for the last years and present our current approach - Saiki. It is a scalable, cloud-based data integration and distribution infrastructure that makes data from our many microservices readily available for analytical teams.
We no longer live in a world of static data sets, but are instead confronted with an endless stream of events that constantly inform us about relevant happenings from all over the enterprise. The processing of these event streams enables us to do near-real time business intelligence. In this context we have evaluated Apache Flink vs. Apache Spark in order to choose the right stream processing framework. Given our requirements, we decided to use Flink as part of our technology stack, alongside Kafka and Elasticsearch.
With these technologies we are currently working on two use cases: a near real-time business process monitoring solution and streaming ETL.
Monitoring our business processes enables us to check if technically the Zalando platform works. It also helps us analyze data streams on the fly, e.g. order velocities, delivery velocities and to control service level agreements.
On the other hand, streaming ETL is used to offload work from our relational data warehouse, which struggles with increasingly high loads. It also reduces latency and facilitates platform scalability.
Finally, we have an outlook on our future use cases, e.g. near-real time sales and price monitoring. Another aspect to be addressed is to lower the entry barrier of stream processing for our colleagues coming from a relational database background.
This presentation discusses design patterns, which are general reusable solutions to commonly occurring problems in software design. It describes several design patterns including creational patterns like factory and singleton that deal with object creation, structural patterns like adapter and proxy that deal with relationships between entities, and behavioral patterns like strategy and observer that deal with communication between objects. Specific patterns like singleton, factory, observer, strategy, and adapter are explained in more detail through their definitions and purposes.
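To make one of the behavioral patterns mentioned above concrete, here is a small illustrative Java sketch of the observer pattern; the Product and PriceObserver names are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

// Observer pattern: observers register with a subject and are notified of state changes
interface PriceObserver {
    void priceChanged(double newPrice);
}

class Product {
    private final List<PriceObserver> observers = new ArrayList<>();

    void addObserver(PriceObserver o) { observers.add(o); }

    void setPrice(double price) {
        // Communication happens only through the observer interface,
        // so the subject never depends on concrete observer classes.
        for (PriceObserver o : observers) {
            o.priceChanged(price);
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        Product product = new Product();
        product.addObserver(p -> System.out.println("Dashboard updated, new price: " + p));
        product.addObserver(p -> System.out.println("Email alert, new price: " + p));
        product.setPrice(19.99); // both observers are notified
    }
}
```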
The document provides an introduction to Airflow and Supervisor. It describes Airflow as a workflow management platform that uses directed acyclic graphs (DAGs) to define and automate tasks and workflows for batch processing. It discusses how Airflow uses Python as a runtime, has different executor types like Local and Celery, and stores metadata about tasks and DAGs in a database. The document also introduces Supervisor as a process monitoring tool used to run and monitor Celery, Flower and Airflow processes, and highlights its benefits like centralized control, a web UI, and automatic restart of failed applications.
Unit test your Java architecture with ArchUnit (Jeremy Cook)
From Confoo 2021.
Software architecture tends to be esoteric and intangible. The result of this is architectural drift, with the architecture losing the qualities it was promoting as the code evolves. This talk will introduce ArchUnit, a library that allows you to test your Java architecture. You'll see how to write unit tests that protect architectural characteristics in your code while making your architecture easier to understand for everyone in your team.
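A minimal example of such a test, assuming ArchUnit and JUnit 5 on the classpath; the package names are placeholders:

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

class ArchitectureTest {

    @Test
    void domainMustNotDependOnInfrastructure() {
        // Import the compiled classes of the application under test
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");

        // Encode the architectural rule so drift fails the build instead of going unnoticed
        ArchRule rule = noClasses()
                .that().resideInAPackage("..domain..")
                .should().dependOnClassesThat().resideInAPackage("..infrastructure..");

        rule.check(classes);
    }
}
```

Rules like this run with the ordinary unit-test suite, which is how the talk proposes keeping the architecture from drifting as the code evolves.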
We all know normalization is crucial to delivering high quality search results. We don’t want uninteresting variations between the query and the document to lead to missed hits (e.g., “celebrity” v. “celebrities”). Normalization of dictionary words is well understood, but what if your application focuses on names? Whether you’re tackling patent examination, sports records, e-commerce, watchlist screening or many other topics, names are often the key. Can your users find “Abdul Jabbar, Karim” if they search for “Kareem AbdalJabar” or “كريم عبد الجبار”? Solr application architects have attempted to address this through custom integration of nickname lists, edit distance, case normalization, phonetic encoding and n-grams (see example #1 or example #2), but doing so requires significant effort and may not address all desired variations. A simpler approach is to use a Solr field type for names that handles these linguistic nuances behind-the-scenes. We’ll talk about how we built this sort of field type via a Solr plug-in for the Rosette Name Indexer. We’ll also discuss examples of use cases this has enabled, how it can be tuned if necessary, and how it connects to the broader trend of entity-centric search.
Domain Driven Design (DDD) is a software design approach that focuses on modeling a domain accurately. It uses ubiquitous language, bounded contexts, and explicit domain models. The key aspects of DDD include developing a shared model with domain experts, separating concerns into bounded contexts, and iteratively refining domain models through close collaboration between technical and domain teams. DDD aims to produce software designs that are more aligned with the mental models of users and stakeholders in a complex domain.
This document discusses building a model serving pipeline. It covers conceptualizing the pipeline, containerizing models, serving models via APIs, managing workflows such as retraining, and designing architectures. An example deep knowledge tracing model is presented that predicts question answers based on user interaction data. The pipeline uses BentoML to package models, Docker for containers, Airflow for scheduling, and Docker Swarm for container management.
These slides explain the factory method design pattern, with complete notes covering the pattern from its definition to its implementation through a code example.
The document discusses the Abstract Factory pattern, which defines an interface for creating families of related objects without specifying their concrete classes. It provides advantages like isolating code from implementation classes and promoting consistency. The implementation overview describes creating shape and color interfaces and classes, an AbstractFactory interface, and Factory classes that extend AbstractFactory and create shape and color objects. FactoryProducer is used to get the appropriate factory. Tests create objects using the factories to demonstrate the pattern.
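A condensed Java sketch of that implementation outline; the interface and class names follow the description above, but the code itself is illustrative rather than taken from the document:

```java
// Families of related products
interface Shape { void draw(); }
interface Color { void fill(); }

class Circle implements Shape { public void draw() { System.out.println("Circle"); } }
class Square implements Shape { public void draw() { System.out.println("Square"); } }
class Red implements Color { public void fill() { System.out.println("Red"); } }
class Blue implements Color { public void fill() { System.out.println("Blue"); } }

// The abstract factory declares creation methods for each product family
abstract class AbstractFactory {
    abstract Shape getShape(String type);
    abstract Color getColor(String type);
}

class ShapeFactory extends AbstractFactory {
    Shape getShape(String type) { return "circle".equalsIgnoreCase(type) ? new Circle() : new Square(); }
    Color getColor(String type) { return null; } // this factory only knows shapes
}

class ColorFactory extends AbstractFactory {
    Shape getShape(String type) { return null; } // this factory only knows colors
    Color getColor(String type) { return "red".equalsIgnoreCase(type) ? new Red() : new Blue(); }
}

// FactoryProducer hands the client the right factory without exposing concrete classes
class FactoryProducer {
    static AbstractFactory getFactory(String kind) {
        return "shape".equalsIgnoreCase(kind) ? new ShapeFactory() : new ColorFactory();
    }
}

public class AbstractFactoryDemo {
    public static void main(String[] args) {
        AbstractFactory shapes = FactoryProducer.getFactory("shape");
        shapes.getShape("circle").draw();   // prints "Circle"

        AbstractFactory colors = FactoryProducer.getFactory("color");
        colors.getColor("red").fill();      // prints "Red"
    }
}
```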
SparkSQL: A Compiler from Queries to RDDs (Databricks)
SparkSQL, a module for processing structured data in Spark, is one of the fastest SQL on Hadoop systems in the world. This talk will dive into the technical details of SparkSQL spanning the entire lifecycle of a query execution. The audience will walk away with a deeper understanding of how Spark analyzes, optimizes, plans and executes a user’s query.
Speaker: Sameer Agarwal
This talk was originally presented at Spark Summit East 2017.
The presentation begins with an overview of the growth of non-structured data and the benefits NoSQL products provide. It then provides an evaluation of the more popular NoSQL products on the market including MongoDB, Cassandra, Neo4J, and Redis. With NoSQL architectures becoming an increasingly appealing database management option for many organizations, this presentation will help you effectively evaluate the most popular NoSQL offerings and determine which one best meets your business needs.
I am sharing one of the presentations I prepared as an engineering student for the 'Business English' module; it is useful for students who want to prepare a presentation in English on design patterns.
Top 5 mistakes when writing Spark applications (hadooparchbook)
This document discusses common mistakes people make when writing Spark applications and provides recommendations to address them. It covers issues related to executor configuration, application failures due to shuffle block sizes exceeding limits, slow jobs caused by data skew, and managing the DAG to avoid excessive shuffles and stages. Recommendations include using smaller executors, increasing the number of partitions, addressing skew through techniques like salting, and preferring ReduceByKey over GroupByKey and TreeReduce over Reduce to improve performance and resource usage.
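As one concrete illustration of the ReduceByKey-over-GroupByKey recommendation, here is a hedged sketch using the Spark Java API; the sample data and the local Spark context are assumptions made for the example:

```java
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

public class ReduceByKeyExample {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[*]", "ReduceByKeyExample");

        JavaPairRDD<String, Integer> sales = sc.parallelizePairs(Arrays.asList(
                new Tuple2<>("DE", 10), new Tuple2<>("DE", 5), new Tuple2<>("FR", 7)));

        // reduceByKey combines values on each partition before the shuffle,
        // so far less data crosses the network than groupByKey().mapValues(...)
        JavaPairRDD<String, Integer> totals = sales.reduceByKey(Integer::sum);

        totals.collect().forEach(t -> System.out.println(t._1() + " -> " + t._2()));
        sc.stop();
    }
}
```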
The document discusses several "xDD" models which are software development processes that rely on short development cycles focused on specific goals like features, tests, or behaviors. It provides details on Feature-Driven Development (FDD), Test-Driven Development (TDD), and Behaviour-Driven Development (BDD) including key steps and founders/influencers of each methodology.
Leveraging IBM Cognos TM1 for Merchandise Planning at Tractor Supply Company ... (QueBIT Consulting)
AGENDA:
Introductions and Company Overviews
TSC Merchandise Planning Solution Overview
Prior State
Solution and Implementation
Tips & Tricks for TM1 Perspectives Templates
Q&A
The document provides an introduction on how to create and manage variable type customer exits in SAP BI 7.0 queries. It discusses setting up a new project for an SAP enhancement, creating a variable in a BW query and setting its type to customer exit, writing ABAP code to manage the variable's values, and testing the customer exit.
The document discusses SAP Certified Application Associate - Financials with SAP Business All-in-One exam questions and answers. It includes 80 multiple choice questions covering topics such as: SAP Best Practices packages and how they are used to accelerate implementations; SAP NetWeaver Business Client and how it provides a unified user interface; and master data setups needed for financial accounting and reporting in SAP. Key tools mentioned include the Solution Builder, Demo Assistant, and SAP Best Practices for Data Migration.
Reporting data in alternate unit of measure in BI 7.0 (Ashwin Kumar)
This document discusses how to report data in alternate units of measure in SAP Business Intelligence 7.0. It covers creating a quantity conversion type using transaction RSUOM, specifying the conversion type in a query definition, and creating a variable for the target unit of measure to provide flexibility in unit selection. When the query is executed, the user will be prompted to select a unit, and results will be displayed in the selected unit due to conversion calculations performed using the conversion factor from the reference InfoObject.
This document provides a tutorial on how to create a CATS type specific approval process for users with activated structural authorizations in SAP. It involves customizing settings, implementing a BAdI, coding a method to determine the approver, and activating the new objects. The method calls a standard functional module to get the leading position and returns the approver, bypassing structural restrictions for this approval process.
This document provides guidance on testing overhead cost accounting functionality in SAP S/4HANA Cloud. It outlines prerequisites for the test including required system access, roles, master data, and preliminary configuration steps. These include setting the controlling area for the user, setting report relevancy, replicating runtime hierarchies, opening the cost accounting period, creating a statistical key figure, and optionally mapping intercompany accounts. The document then provides a table with detailed steps to test processes like recording costs, allocations, reassigning costs, executing allocation cycles, and analytics reporting. It also includes optional steps to test commitment management, cost center budgeting, and budget transfers.
This document provides a step-by-step process for writing Batch Data Communication (BDC) reports to transfer legacy data into an SAP system. It describes recording a transaction, creating an ABAP report from the recording, and writing ABAP code to fetch data from a legacy system and store it in SAP. The steps include creating internal tables, calling upload functions, using a loop to replace constants with table fields, and testing the data transfer.
Beginner's guide: create a custom 'copy' planning function type (Naveen Kumar Kotha)
This document provides a step-by-step guide to creating a custom "Copy" planning function in SAP BW. It explains how to set up the environment, create a custom class, define the planning function type in transaction RSPLF1, integrate the function into a planning application, create and execute a planning sequence, modify the function code, debug and test, and transport the custom function. The guide aims to help users avoid obstacles in customizing planning functions and provide a working example to copy actual sales data from one year to a plan for the next year.
- The PowerPivot data refresh problem needs to be fixed by using the PowerPivot Configuration Tool and selecting the Configure or Repair PowerPivot for SharePoint option, as the Secure Store Service target application used for unattended data refresh has been deleted.
- The SSISOwners SQL Server login needs to be mapped to the SSISDB database and assigned to the db_ssisadmin role to grant appropriate permissions for ETL administrators.
- A Regional Sales report needs to be created using a PivotTable in Excel 2010 or a matrix in SQL Server Report Builder to allow filtering by year and drilling down through the Products hierarchy while highlighting sales values under $5,000 in red.
The document explains how to use a customer exit variable in SAP BW/BI reports to display month-to-date data based on the current date. It describes creating a variable called Z_FDCPM to return the first day of the current or previous month. The code for the variable is provided to calculate this date. Instructions are given on testing the variable, designing a report with it, and executing the report to see month-to-date results.
This document provides guidance on creating new units of measure in SAP BW. It discusses the different types of units of measure and how to define a new unit using transaction code CUNI. Details like the measurement text, conversion factors, and ISO codes should be specified when creating a new unit of measure. Alternative units of measure refer to any units other than the base unit defined for a dimension. Related resources on units of measure in SAP are also listed.
Unit 1: Introduction to SAP Analytics Cloud planning
No exercises
Unit 2: Dimensions and planning models
Exercise 1: Create a public dimension and maintain master data
Exercise 2: Import dimensional data
Exercise 3: Create and use a measure-based model
Exercise 4: Create a measure and account-based model
Exercise 5: Import actual data from a file
Exercise 6: Import forecast data from a file
Unit 3: Core planning functionality
Exercise 7: Work with data tables, versions, mass data entry
Exercise 8: Add new members and compare the data
Exercise 9: Distribute using the planning panel
Exercise 10: Configure and translate currencies
Unit 4: Forecasting
Exercise 11: Create a rolling forecast input form
Exercise 12: Create a predictive forecast
Exercise 13: Use smart predict with a planning model
Exercise 14: Create a value driver tree
Unit 5: Data actions and allocation processes
Exercise 15: Create data actions to copy data within a model
Exercise 16: Create data action to copy data between models
Exercise 17: Create a data action to calculate labor and benefits
Exercise 18: Dynamic data actions & tables
Exercise 19: Configure Multi Actions
Exercise 20: Create and execute an allocation
Simplifying the Complexity of Salesforce CPQ: Tips & Best Practices (panayaofficial)
Since Salesforce CPQ (part of Revenue Cloud) entered the market, it has been a real game-changer for businesses. However, it comes with some challenges, especially if you assume that maintaining Salesforce CPQ is like maintaining core Salesforce products.
Failing to recognize the level of expertise needed to manage Salesforce CPQ can lead to a landslide of bugs and errors and costly consequences to the business. But no need to worry. We have the right technology to help you get it right!
Join our upcoming webinar with Salesforce MVP Ben McCarthy, Salesforce CPQ master Alyssa Lefebvre, and Oz Lavee, Panaya ForeSight CTO:
During the webinar, you will learn how to Simplify the Complexity of Salesforce CPQ by:
· Successfully managing CPQ deployments
· Dealing with ongoing CPQ implementations, Org dependencies review, and updates
· Easily finding the root cause of a bug or a failure
· Managing CPQ tests the smart way
This document provides an introduction and overview of SQL Server 2005 Reporting Services:
- It describes the main components of the Reporting Services architecture including Report Server, Report Manager, Report Designer, and Report Builder.
- It explains how to use Report Designer to create reports using the Report Wizard, modifying existing reports, and designing reports from scratch.
- It covers how to publish reports to the Report Server so they are available to users.
- It introduces Report Builder as an alternative reporting tool for end users and how to create a data model to define the data available to Report Builder reports.
This document summarizes the key changes in SAP Business Planning and Consolidation 10.1, which includes two versions - classic and unified. The unified version requires SAP HANA and references SAP BW InfoProviders, while classic can use various databases. Both have a new HTML5 user interface. Integration and functionality differ between the versions.
FI enhancement technique: how-to guide on the usage of business transaction ... (Kranthi Kumar)
This document provides a step-by-step guide on configuring and using Business Transaction Events (BTEs) in SAP FI to populate a custom value in an accounting document field. It describes BTEs and their differences from BADIs, the two types of BTE interfaces, and provides an example of using a process interface BTE to copy the text "Demo BTE" to the assignment field when a document is posted for a specific company code. The steps include identifying the BTE, creating a custom function module, assigning ABAP code, assigning the BTE, and testing the configuration.
FI enhancement technique: how-to guide on the usage of business transaction ... (Rajeev Kumar)
This document provides a step-by-step guide on configuring and using Business Transaction Events (BTEs) in SAP FI to populate a custom value in an accounting document field. It describes BTEs and their differences from BADIs, the two types of BTE interfaces, and provides an example of using a process interface BTE to copy the text "Demo BTE" to the assignment field when a document is posted for a specific company code. The steps include identifying the BTE, creating and assigning a custom function module to the BTE, and testing the configuration.
Differences between NAV 2017, NAV 2018 & Dynamics 365 Business Central (Info San)
This slideshow shows the differences between NAV 2017, NAV 2018 and Dynamics 365 Business Central. It contains screenshots with descriptions of Dynamics 365 Business Central.
The document discusses journals in SAP Business Planning and Consolidation for NetWeaver (BPC NW). It describes how to create a journal template using the journal wizard, which allows setting header dimensions, dimension order, additional header items, and summary. Journal templates are used to make adjustments to data through journal entries, providing an alternative to input sheets for updating application data.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit Paris (Neo4j)
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Discover Neo4j's latest innovations, including the latest cloud integrations and product improvements that make Neo4j an essential choice for developers building applications with connected data and generative AI.
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
E-commerce Development Services - Hornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
Takashi Kobayashi and Hironori Washizaki, "SWEBOK Guide and Future of SE Education," First International Symposium on the Future of Software Engineering (FUSE), June 3-6, 2024, Okinawa, Japan
Measures in SQL (SIGMOD 2024, Santiago, Chile) (Julian Hyde)
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
What is Master Data Management by PiLog Group (aymanquadri279)
PiLog Group's Master Data Record Manager (MDRM) is a sophisticated enterprise solution designed to ensure data accuracy, consistency, and governance across various business functions. MDRM integrates advanced data management technologies to cleanse, classify, and standardize master data, thereby enhancing data quality and operational efficiency.
Introducing Crescat - Event Management Software for Venues, Festivals and Eve... (Crescat)
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
Microservice Teams - How the cloud changes the way we work (Sven Peters)
A lot of technical challenges and complexity come with building a cloud-native and distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot of us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
Graspan: A Big Data System for Big Code Analysis (Aftab Hussain)
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
What is Augmented Reality Image Tracking (pavan998932)
Augmented Reality (AR) Image Tracking is a technology that enables AR applications to recognize and track images in the real world, overlaying digital content onto them. This enhances the user's interaction with their environment by providing additional information and interactive elements directly tied to physical images.
WhatsApp offers simple, reliable, and private messaging and calling services for free worldwide. With end-to-end encryption, your personal messages and calls are secure, ensuring only you and the recipient can access them. Enjoy voice and video calls to stay connected with loved ones or colleagues. Express yourself using stickers, GIFs, or by sharing moments on Status. WhatsApp Business enables global customer outreach, facilitating sales growth and relationship building through showcasing products and services. Stay connected effortlessly with group chats for planning outings with friends or staying updated on family conversations.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
Artificial Intelligence and XPath Extension Functions (Octavian Nadolu)
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
A Study of Variable-Role-based Feature Enrichment in Neural Models of Code (Aftab Hussain)
Understanding variable roles in code has been found to be helpful to students learning programming -- could variable roles also help deep neural models in performing coding tasks? We do an exploratory study.
- These are slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering, ICSE 2023, Melbourne Australia
Workshop - Innovating with Generative AI and Knowledge Graphs (Neo4j)
Go beyond the AI hype and discover practical techniques for using AI responsibly across your organization's data. Explore how to use knowledge graphs to increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships and LLMs to bring domain-specific context and improve reasoning.
Bring your laptop and we will guide you through setting up your own generative AI stack, with practical, coded examples to get you started in minutes.