Part 1: Trigger Batch/Command File from FDMEE
How to make FDMEE-triggered Essbase calculation and partition scripts targeted, so that they run only for the entities applicable to the current FDMEE load.
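The targeting idea can be sketched as follows. In a real FDMEE event script (an AftLoad script written in Jython) the entity list would be read from the TDATASEG staging table for the current load; here it is passed in directly, and the calc template, dimension names, and function names are illustrative assumptions rather than the FDMEE API.

```python
# Sketch: build a calc script FIXed on only the entities in the current
# FDMEE load. In an actual AftLoad event script the entity list would come
# from the TDATASEG staging table for the current LOADID; every name here
# is an illustrative assumption.

CALC_TEMPLATE = """\
SET UPDATECALC OFF;
FIX({entities})
    CALC DIM("Account", "Period");
ENDFIX
"""

def build_targeted_calc(entities):
    """Deduplicate and quote each entity, then substitute the list into
    the FIX statement so the calc only touches what was just loaded."""
    quoted = ", ".join('"%s"' % e for e in sorted(set(entities)))
    return CALC_TEMPLATE.format(entities=quoted)

if __name__ == "__main__":
    print(build_targeted_calc(["E200", "E100", "E200"]))
```

The generated text would then be submitted to Essbase (for example by writing it to a script file that the triggered batch executes), keeping the aggregation scoped to the loaded entities instead of the full Entity dimension.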
Part 2: Run a MaxL script from Smart View
Use an Essbase CDF inside an Essbase calculation script to run a MaxL script. The calculation script can then be run from Smart View.
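A minimal sketch of this wiring: a wrapper calculation script whose only action is a RUNJAVA call into the Calc Manager MaxLFunctions CDF, so that executing the calc script from Smart View runs the MaxL. The CDF class name is the commonly documented `com.hyperion.calcmgr.common.cdf.MaxLFunctions`, but the exact argument order and credential handling vary by Essbase release, so treat the generated text as an assumption to verify against your version's Calc Manager CDF documentation.

```python
# Sketch: generate the wrapper calc script text. Executing the resulting
# script from Smart View would invoke the MaxLFunctions CDF, which runs
# the supplied MaxL statements. The argument layout is an assumption;
# check it against your Calc Manager CDF documentation.

def build_maxl_wrapper(user, password, server, maxl_statements):
    """Emit a calc script containing a single RUNJAVA call: a MaxL login,
    the caller's MaxL statements, and a logout."""
    args = ['"-D"',
            '"login %s %s on %s;"' % (user, password, server)]
    args += ['"%s"' % stmt for stmt in maxl_statements]
    args.append('"logout;"')
    return ("RUNJAVA com.hyperion.calcmgr.common.cdf.MaxLFunctions\n    "
            + "\n    ".join(args) + ";")

if __name__ == "__main__":
    print(build_maxl_wrapper("admin", "password", "essbase01",
                             ["execute calculation 'Plan'.'Plan1'.'AggAll';"]))
```

In practice the credentials would not be hard-coded in the script; an encrypted MaxL login or a private credential file is the usual precaution.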
This document discusses migrating applications from Oracle's Hyperion Financial Data Management (FDM) Classic to the newer Oracle Financial Data Management Enterprise Edition (FDMEE). It provides an overview of the migration utility that can automate much of the migration process. The utility can migrate artifacts like locations, import formats, rules, and mappings, but will not migrate items like security, scripts, or custom reports. The document reviews prerequisites for the migration utility and the general steps for completing a migration, including installing the utility, importing its scenarios, and configuring the source and target repositories.
Tony Scalese, Edgewater Ranzal Oracle Financial Data Management (FDM) practice director, presented "Jython Scripting in FDMEE - It's Not That Scary" at KScope14.
This webinar discusses migrating from Oracle's Hyperion Financial Data Management (FDM) Classic application to the newer FDMEE application. It introduces the speakers and outlines the agenda. The presentation covers prerequisites for conversion, system requirements, artifacts that can and cannot be migrated, opportunities and nuances of FDMEE, and how to develop a migration plan. Benefits of migrating include improved functionality, integration, data integrity, and flexibility. The application migration process involves running scripts, configuring Oracle Data Integrator, importing scenarios, and executing extracts to migrate mappings, historical data, and other artifacts from FDM to FDMEE. Questions from attendees are invited at the end.
Minnesota Hyperion User Group Discussion on Financial Data Management Enterprise Edition: the combination of Financial Data Quality Management (FDM) and ERP Integrator (ERPi)
What is FDMEE?
What can you do with FDMEE?
Considerations for converting to FDMEE
What’s new with FDMEE?
What’s coming next with FDMEE?
Jon Harvey, Oracle EPM Practice Lead
Josh Kinkeade, Oracle HFM Practice Lead
Tony Scalese, Edgewater Ranzal Oracle Financial Data Management (FDM) practice director, presented "Getting the Most Out of FDMEE in a Multiproduct Environment" at KScope14.
The document discusses Oracle's FDMEE (FDM Enterprise Edition), which replaced the classic FDM and ERPi data integration tools in Oracle EPM version 11.1.2.3. FDMEE can integrate data from various sources like ERP systems, databases, and files into EPM applications like Oracle Hyperion Planning. It has the functions of both classic FDM and ERPi and uses the Oracle Data Integrator engine. The document then provides a tutorial on using FDMEE to load data from a text file into an Oracle Hyperion Planning application in three parts - concepts, configuration and definition, and execution.
The document discusses loading text data into an Oracle Hyperion Planning application using the Oracle Hyperion Financial Data Quality Management Enterprise Edition (FDMEE). It provides a tutorial in 3 parts: concepts, configuration and definition, and execution. It explains how to register a text file as the data source, define mappings between the source and Planning application dimensions, and load the data. FDMEE allows integrating data from various sources and writing data back to source systems.
Getting the Most Out of FDM - Integrating with Essbase and Planning (finitsolutions)
FDM offers a centralized platform for collecting data from all areas of the organization and a standardized approach to data validation and loading. Its intuitive interface, powerful audit trail, repeatable end-user-driven workflow, and out-of-the-box reports have made it an integration staple for consolidation systems such as HFM and Enterprise.
With an adapter specifically designed to integrate with Essbase and Planning, FDM can be a solution for those applications as well. We will take a look at successful FDM to Essbase and Planning implementations and how FDM can be configured in ways that lead to successful integrations.
This document discusses custom reporting in Oracle's Financial Data Management Enterprise Edition (FDMEE). It provides examples of custom reports that were created to enhance standard reports, integrate with other systems, and align with business reports. The key steps outlined for creating a custom report include defining the SQL query, building the report template in BI Publisher Desktop, defining the report, and testing the report. One detailed example shows how a custom report was built to include account descriptions from both FDMEE and an ERP system by joining tables in the query and using synonyms.
Cast Iron Cloud Integration Best Practices (Sarath Ambadas)
This document provides best practices for developing and managing WebSphere Cast Iron integrations. It discusses naming conventions, error handling, orchestration development, appliance configuration, performance tuning, and upgrade processes. Development best practices include splitting large orchestrations, using configuration properties, and testing before deploying. Appliance best practices involve monitoring resources and purging logs. Performance can be improved by configuring connection pooling, batch processing, and tuning job concurrency. Upgrades involve backing up repositories and deploying existing projects to new versions.
Using IBM's Cast Iron with SugarCRM to Integrate Customer Data | SugarCon 2011 (SugarCRM)
Given the global alliance announcements between IBM and SugarCRM, we will discuss the possibilities of how supply chain based companies can utilize IBM's Cast Iron and SugarCRM to effectively integrate customer data with other ERP and Supply Chain systems.
Presented by Scott Tabak, Highland Solutions, at SugarCon 2011
How to migrate to FDMEE and not die trying: Levi's knows (Christopher Chong)
Levi's migrated from Hyperion FDM Classic to FDMEE and learned several lessons during the process. Key aspects of the migration included transforming import scripts, simplifying mappings between dimensions, and moving historical data. Levi's also began using DRM to centrally manage brand and cost center hierarchies between Essbase and FDMEE for improved data alignment. The migration helped address prior pain points around complex mappings and inconsistencies while reducing reliance on IT resources.
JavaOne BOF 5957: Lightning Fast Access to Big Data (Brian Martin)
This document discusses IBM's WebSphere eXtreme Scale product, an in-memory data grid that provides lightning fast access to big data. Some key capabilities of the data grid include horizontal scalability, fault tolerance, data redundancy and replication. The data grid can be used to cache application state, HTTP sessions, and operational data to improve performance and scalability compared to traditional caching approaches. It also allows for distributed computing patterns like map-reduce processing.
Data Sharing using Spectrum Scale Active File Management (Trishali Nayar)
IBM Spectrum Scale with Active File Management (AFM) allows storing data safely across geographically distributed sites using a clustered file system cache. AFM moves data between the home cluster where data is primarily stored and cache clusters where data is made available on demand or periodically to increase availability. Modes like read-only, single-writer, and independent-writer define how data is cached, modified, and synchronized between sites.
ODTUG KSCOPE 2017 - Black Belt Techniques for FDMEE and Cloud Data Management (Francisco Amores)
This document provides techniques for advanced data integration using Oracle's Hyperion Financial Management (HFM) and Financial Data Management Enterprise Edition (FDMEE). It discusses 25 techniques across areas like data extraction, mappings, scripting, integration with EPM applications, and automation. Examples include using member lists and functions to extract additional data, mapping based on target dimension values, running MaxL scripts via the Essbase JAPI, and enhancing the standard scheduler to allow more flexible scheduling options.
Best Practices in Preparing for and Managing your EPM Infrastructure (Alithya)
Learn the key architecture components to consider with the Oracle EPM Product stack, such as proper sizing, virtual vs. physical, and maintaining the EPM Environment. Also gain insight about the key supported elements of the EPM product stack, and hear about what’s coming in future releases related to operating systems, database engines, and client software support.
Introduction to PackedObjects
JavaOne 2013 CON5758
Ryan A. Sciampacone, Senior Software Developer, IBM JTC
Abstract: In Java the layout of objects is abstracted away from the application, leaving Java inherently challenged by concerns such as (1) interoperation with native data structures, (2) the dense packing of Java objects, and (3) cache conflicts and false sharing. With PackedObjects, there is a proposal for a new explicit object model that enables direct binding of native data structures, dense packing of Java objects to improve the performance of operations such as serialization, and precise object layout to allow for finer-grained control over locality and the development of a high-performance concurrent library. Learn more in this session.
UKOUG APPS 14: Optimizing Performance for Oracle EPM Systems (Alithya)
Oracle's Enterprise Performance Management (EPM) is a cutting-edge suite of tools engineered to deliver analytical insights to users at high speeds, but any competitive advantage is often blunted by sub-optimal infrastructure and easily remedied software configuration issues. The result is slow user response times, endless reboots, and lengthy calculation and consolidation times.
Edgewater Ranzal infrastructure consultant Paul Rix presents a compact guide on how to get the best performance from Oracle EPM investments as well as those planning for increased workload in the near future.
Mixing theory and real-world examples, we navigate the key elements of EPM performance tuning from the low hanging fruit of 10-minute fixes to the architectural and infrastructure blockages that commonly prevent EPM systems from reaching their true performance potential.
This document discusses taking source filters in Oracle's Financial Data Management Enterprise Edition (FDMEE) to the next level. It presents two case studies of customizing source filters: 1) For a Universal Data Adapter extracting from SQL, dynamically setting a filter parameter value to include all entities in a division. 2) For an HFM extract, dynamically setting dimension filters based on a user attribute value. The document explains how to build custom filter values in a BefImport script and update the parameter value at runtime to make it dynamic rather than static. This allows more flexible filtering than the out-of-the-box capabilities in FDMEE.
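The runtime-filter technique summarized above can be sketched as pure string logic. In an actual BefImport script, the computed value would be written back to the load rule's filter parameter through the fdmAPI object supplied to FDMEE event scripts; the lookup table, function names, and column names below are illustrative assumptions, not the FDMEE API.

```python
# Sketch: build a dynamic source-filter value, BefImport-style. Here the
# entity list per division is hard-coded; in FDMEE it would be looked up
# (e.g., from a mapping table) and the result written to the load rule's
# filter parameter at runtime. All names are illustrative assumptions.

DIVISION_ENTITIES = {
    "DIV_NORTH": ["E100", "E110", "E120"],
    "DIV_SOUTH": ["E200", "E210"],
}

def build_entity_filter(division):
    """Return an IN-list filter fragment covering every entity in the
    division, ready to substitute into the extract's WHERE clause."""
    entities = DIVISION_ENTITIES.get(division, [])
    if not entities:
        return "1 = 0"  # unknown division: match nothing, not everything
    return "ENTITY IN (%s)" % ", ".join("'%s'" % e for e in entities)

if __name__ == "__main__":
    print(build_entity_filter("DIV_NORTH"))
```

Because the value is rebuilt on every import, the filter tracks hierarchy changes automatically instead of being a static string maintained by hand in the load rule.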
Glimpse into the workings of an Edgewater Ranzal Infrastructure Engineer who specializes in Enterprise Performance Management (EPM). Presented at OAUG Collaborate 2015.
Bringing Mainframe Security Information Into Your Splunk Security Operations ... (Precisely)
In today’s always-on IT world, a single security breach can bring your business to a standstill. You rely on Splunk’s powerful platform for monitoring, integrating, analyzing and visualizing security data from across your enterprise to protect your organization from security threats and incidents. However, Splunk doesn’t natively interact with mainframe and IBM i systems, leaving a glaring blind spot.
Join us to learn how to effectively integrate mainframe and IBM i security data into Splunk, providing you with a comprehensive view of your security operations landscape.
Topics will include:
- An overview of different types of security data and how to tap into mainframe & IBM i data in your Splunk Security Operations Center
- Unique and comparative differentiators across security data integration tools to be used within the Splunk Security Operations Center
- Customer use cases and examples
VMworld 2013: Strategic Reasons for Classifying Workloads for Tier 1 Virtuali... (VMworld)
This document discusses the importance of classifying workloads before virtualizing tier 1 applications. Workload classification involves measuring existing application and database workloads to properly size and place them in a new virtualized environment. This reduces risks and speeds up implementation by providing the proper analysis. The document outlines challenges, opportunities, models, metrics, and tools, along with an example of how MolsonCoors used workload classification to virtualize their SAP landscape.
Content delivery - Plone Symposium East 2010 (alan runyan)
ContentMirror allows Plone content to be delivered out-of-band by synchronizing content from the Plone database into a relational database management system (RDBMS). This decouples content delivery from Plone, allowing other applications and frameworks to serve the content dynamically or statically. ContentMirror provides a simple, extensible, and fast way to mirror Plone content and has been used successfully in several large projects. While atypical for Plone, out-of-band content delivery can help overcome constraints for content management when organizational structures or technical requirements differ from Plone's default approach.
Scalable Web Architectures: Common Patterns and Approaches - Web 2.0 Expo NYC (Cal Henderson)
The document discusses common patterns and approaches for scaling web architectures. It covers topics like load balancing, caching, database scaling through replication and sharding, high availability, and storing large files across multiple servers and data centers. The overall goal is to discuss how to architect systems that can scale horizontally to handle increasing traffic and data sizes.
Beginner's Guide: Programming with ABAP on HANA (Ashish Saxena)
The focus of this blog is to present an overview of the new programming techniques in ABAP after the introduction of the HANA database, and to provide guidance on why and how an ABAP developer should start transitioning their code to the new coding techniques.
How Adobe Does 2 Million Records Per Second Using Apache Spark! (Databricks)
Adobe’s Unified Profile System is the heart of its Experience Platform. It ingests TBs of data a day and is PBs large. As part of this massive growth we have faced multiple challenges in our Apache Spark deployment which is used from Ingestion to Processing.
This document provides an overview of PHP, discussing why it is popular for web development, how to scale PHP applications, and caching strategies. It introduces PHP basics and arrays. It then explains that PHP is popular because its array syntax can be directly passed to JavaScript, avoiding the need for object mapping. The document discusses scaling by moving to multiple servers ("scaling out") rather than increasing resources on one server ("scaling up"). It covers database replication and load balancing across database slaves. It also recommends scaling the web tier by storing sessions in a database. Finally, it discusses caching frequently accessed data in memory caches like APC or Memcached to improve performance.
The document discusses big data challenges and solutions. It describes how specialized systems like Hadoop are more efficient than relational databases for large-scale data. It provides examples of open source projects that can be used for tasks like storage, search, streaming data, and batch processing. The document also summarizes the design of the Voldemort distributed key-value store and how it was inspired by Dynamo and Memcached.
The document discusses scalable web architectures and common patterns for scaling web applications. It covers key topics like load balancing, caching, database replication, and data federation. The overall goal of application architecture is to scale traffic and data while maintaining high availability and performance. Horizontal scaling by adding more servers is preferable to vertical scaling of buying larger servers.
The document discusses scalable web architectures and common patterns for scaling web applications. It covers key topics like load balancing, caching, database replication and sharding, and asynchronous queuing to distribute workloads across multiple servers. The goal of these patterns is to scale traffic, data size, and maintainability through horizontal expansion rather than just vertical upgrades.
The document provides an overview of scaling principles for web applications, beginning with optimizing a single server application and progressing to more advanced architectures involving load balancing, multiple web/application servers, and multiple database servers. It discusses profiling applications to identify bottlenecks, various caching and optimization strategies, Apache configuration for handling load, and links to additional resources on related topics.
The document provides an overview of scaling principles for web applications, beginning with optimizing a single server application and progressing to more advanced architectures involving load balancing, multiple web/application servers, and multiple database servers. It discusses profiling applications to identify bottlenecks, various caching and optimization strategies, Apache configuration for prefork MPM, and load balancing technologies like DNS round robin, Apache reverse proxy, HAProxy and Pound. Links are provided to additional resources on related topics.
NoSQL databases are non-relational databases designed for large volumes of data across many servers. They emerged to address scaling and reliability issues with relational databases. While different technologies, NoSQL databases are designed for distribution without a single point of failure and to sacrifice consistency for availability if needed. Examples include Dynamo, BigTable, Cassandra and CouchDB.
Catalyst - refactor large apps with it and have fun!mold
This document discusses refactoring a large Perl application using Catalyst. Some key points:
1) The existing application was built over time by many people and contained inconsistencies, bugs and hacks. Refactoring with Catalyst aimed to make the code more maintainable, easier to work with, and fun to develop.
2) Catalyst provides an MVC framework and conventions that help split code into logical modules and provide common web functionality out of the box.
3) There was an initial steep learning curve to understand Catalyst and choose supporting libraries, but Template Toolkit, DBIx::Class and other CPAN modules helped simplify tasks like templates, object-relational mapping and handling web requests
A presentation about the problems you'll face when dealing with the relational model and highly customizable general-purpose projects, with a look at the NoSQL word focusing on a real solution, a graph database.
Short and comprehensive manual to extend your local matlab with a high performance computing cluster of NVidia tesla's 2070 graphical processing units.
The document discusses various techniques for performance tuning and cluster administration in HBase, including garbage collection tuning, use of memstore-local allocation buffers (MSLAB), enabling compression, optimizing splits and compactions through pre-splitting regions, and addressing hotspotting through manual splits. It provides guidance on configuring garbage collection, compression codecs, and approaches for managing splits and compactions to reduce disk I/O loads.
This document discusses how to maintain large web applications over time. It describes how the author's team managed a web application with over 65,000 lines of code and 6,000 automated tests over 2.5 years of development. Key aspects included packaging full releases, automating dependency installation, specifying supported environments, and automating data migrations during upgrades. The goal was to have a sustainable process that allowed for continuous development without slowing down due to maintenance issues.
This document provides an agenda and details for a class on databases and servers. It discusses homework status, projects 2 and 3 which involve building a website with front-end and back-end components. It demonstrates deploying a sample node app to IBM Bluemix and using cloud foundry commands. Key database topics covered include SQL vs noSQL, using local databases, and database services. An optional extra homework is assigned to deploy a pizza website project to Bluemix using a database.
"PHP from soup to nuts" -- lab exercisesrICh morrow
This document provides instructions for setting up a LAMP (Linux, Apache, MySQL, PHP) development environment on Amazon Web Services (AWS) for completing a series of PHP/LAMP labs. It describes launching an EC2 Linux instance on AWS, installing the LAMP stack, and downloading lab code files. The labs cover topics like control structures, data types, input/output, forms, files, cookies, sessions, and regular expressions. Students are instructed to stop their EC2 instance each day to avoid costs when not in use.
This document provides an overview of using Cassandra in web applications. It discusses why developers may consider using a NoSQL solution like Cassandra over traditional SQL databases. It then covers topics like Cassandra's architecture, data modeling, configuration options, APIs, development tools, and examples of companies using Cassandra in production systems. Key points emphasized are that Cassandra offers high performance but requires rewriting code and developing new processes and tools to support its flexible schema and data model.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
2. www.123olap.com
Future Webinars
We have a large amount of webinar / training content available related to:
Essbase
FDMEE and Jython
OBIEE
HFM
DRM
If you'd like to receive a notification when we schedule webinars, please email bash@epmclarity.com.
3. Introductions
Bernard Ash - EPMClarity.com
EPM Gurus – 15+ years and counting
123Olap.com – premier Hyperion training provider:
Rudy Zucca, Eric Eriksen, Matthias Wohlen
Experienced and renowned consultants, NOT just trainers.
Konvergence.com
Vendor of choice – come see demos at booth #111.
With Essbase or Without. EPM and Beyond.
Come find out about the “Save Essbase!” revolution.
4. Agenda
Part 1: Trigger Batch/Command File from FDMEE
How to make FDMEE-triggered Essbase calculation and partition scripts targeted so that they run only for the entities that are applicable to the current FDMEE load.
Part 2: Run a MAXL script from Smartview
Use an Essbase CDF inside an Essbase calculation script to run a MAXL script. The calculation script can be run from Smartview.
5. Part 1: Trigger Batch/Command File from FDMEE
To run parameterized MAXL that will orchestrate TARGETED:
1) Calculation scripts
2) Partition scripts
7. Why FDMEE?
End-user-driven data loads
Maintenance of mapping tables by end-users, in an end-user-friendly interface
Management and remediation of kickouts in an end-user-friendly interface
Flexibility with Jython scripting
How do I get to FDMEE from FDM, and should I?
See the other presentation about the migration from FDM to FDMEE.
8. Why FDMEE vs. FDM Classic
Sunset
Oracle Enterprise Performance Management System 11.1.2.3 is the terminal release for FDM Classic.
ODI
FDMEE leverages the power of the go-forward data integration platform – ODI.
LCM
Multiple platforms, unlike FDM Classic
IE or Firefox support
Workspace integration
64-bit
Jython and access to Java libraries
9. Why still use BSO?
Hybrid is not there yet, but we all wait excitedly for Essbase Cloud / 12c.
ASO has a limitation of no more than 2^64 intersections.
Easier to develop and support, because many more people have BSO skills than ASO and MDX skills.
10. Intro / Concepts
Substitution variables may not be the best way; we could use the API.
Not only do you want to make your calculation TARGETED (with a FIX), you can gain tremendous performance benefits by TARGETING your partition scripts as well.
Only level 0 on the partition definition, especially if your target is ASO.
Upper-level write-back – don't get me started. How many of you do upper-level write-back?
11. Major Hacks – It shouldn't be this hard ;-)
Not pretty, because it was a bit complicated to manage double quotes when passing parameters at the command line, and to work around the limit on the length of a string that can be passed to the command line. At first I wanted to pass one command-line string with all the entities, but I found out that DOS limits the length of that string, so I had to resort to looping in DOS. That was a real low point in my life – looping in DOS – YUCK!... LOL!
The script loops through the parameters using the comma as a separator. It also reconstructs the list in a specific double-quote format so that it is acceptable to MAXL and Essbase. With that high-level explanation, in conjunction with the liberal commenting in the code, it should hopefully be understandable.
When we call the command file and pass parameters (the entities) in the format MAXL_DEV.cmd "France","Switzerland","China", the looping handles the fact that we don't know how many entities we will receive. It then reformats the list into a format that is acceptable to MAXL and Essbase (as a substitution variable).
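The quote-juggling described above is easier to see in code. Below is a minimal sketch of the reformatting logic in Python (the same dialect as FDMEE's Jython scripts); the DOS version in the lab loops over the arguments to build the same string. The Sample.Basic and CurrEntities names are illustrative assumptions, as is the use of a database-level substitution variable.

```python
def to_maxl_entity_list(raw_args):
    """Split the raw parameter string on commas, strip any surrounding
    double quotes, and rebuild the list in the double-quote format that
    MAXL and Essbase accept, e.g. '"France","New Zealand"'."""
    entities = [e.strip().strip('"') for e in raw_args.split(",") if e.strip()]
    return ",".join('"%s"' % e for e in entities)

def set_subvar_statement(app, db, var, raw_args):
    # MaxL to push the reformatted list into a substitution variable,
    # which the calc script can then use inside FIX(&CurrEntities).
    return "alter database %s.%s set variable %s '%s';" % (
        app, db, var, to_maxl_entity_list(raw_args))

print(set_subvar_statement("Sample", "Basic", "CurrEntities",
                           '"France","Switzerland","China"'))
```

Note that re-wrapping every entity in double quotes is what makes names with spaces (Brazil is fine, "New Zealand" is not, without quotes) survive the trip through the command line into MAXL.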
12. What is the #1 most important (and easiest) thing for calc performance?
FIX()!
Then why does the FDMEE -> Essbase calc script integration not support passing the entities / departments, etc. to the calc script to narrow down the FIX of a calculation? Also, partitions are not integrated with FDMEE, and certainly not TARGETED partitions.
And we know that, since Hybrid doesn't cure all our ailments "yet" ;-), we probably need to use our BSO workhorse to calculate some "sub-models" and then partition the results to an ASO reporting cube. Wouldn't it be great if we could:
Trigger a calc the second the data is loaded – a calc that is TARGETED / "FIX'd" on ONLY the incoming entities
Then automatically push the results to the ASO cube once the calc has finished, using a TARGETED partition
Okay, now that we all agree that this is pretty fundamental (or at least highly desirable) functionality, let's show you how to implement it.
13. The good, the bad, and the ugly – DOS
The Linux shell is much better.
We will not cover DOS scripting in this class because:
a) we don't have time
b) I wouldn't wish that on my worst enemy ;-)
But if you need to customize the DOS code here, Google is your friend – and so am I. Call / email me if you get stuck.
14. Exercise – Add the magic sauce
You see, FDMEE has events, like FDM did.
Add the pre-written script to the AftLoad event in FDMEE. This script queries the tDataSeg table to get the distinct market members that were in the previous load file:
"SELECT DISTINCT UD2X FROM [HYPFDMEE].[dbo].[TDATASEG] where loadid = '" + lid + "'"
Run a load.
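To make the moving parts concrete, here is a hedged sketch of the string handling such an AftLoad script performs: build the tDataSeg query for the current load, then build the MAXL_DEV.cmd call from the distinct UD2X values it returns. The FDMEE context / JDBC plumbing is only hinted at in comments because it varies by environment and release; treat those names as assumptions and check your version's scripting API guide.

```python
def build_entity_query(load_id):
    """The SQL from the slide: distinct UD2X (the mapped market/entity
    dimension) for the rows of the load that just finished."""
    return ("SELECT DISTINCT UD2X FROM [HYPFDMEE].[dbo].[TDATASEG] "
            "WHERE loadid = '%s'" % load_id)

def build_command(entities, cmd_path="MAXL_DEV.cmd"):
    """Format the call shown earlier, e.g.
    MAXL_DEV.cmd "France","Switzerland","China" """
    return "%s %s" % (cmd_path, ",".join('"%s"' % e for e in entities))

# Inside a real FDMEE AftLoad event script (names below are assumptions):
#   lid = fdmContext["LOADID"]           # id of the load that just ran
#   rows = <run build_entity_query(lid) over a JDBC connection>
#   os.system(build_command(rows))       # hands the entities to the cmd file

print(build_command(["France", "Switzerland", "China"]))
```

From there the command file takes over: it loops through the entities, reformats them, and fires the TARGETED MAXL.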
15. Exercise – Show me the data
Be sure you have FX rates.
View the results in BSO (input member converted into USD).
View the results in ASO.
Change the FX rates and rerun.
Extra credit – add support for another country.
16. Advanced Exercise
Add logic to wrap "entities" / markets (e.g. Brazil) in double quotes in DOS so that they carry double quotes into the partition script in MAXL. This supports entities with spaces in their names. It requires some painful DOS coding, so proceed at your own risk – or migrate to Linux ;-)
17. Part 2: Trigger a MAXL file from Smartview
But watch out, because users can hurt themselves and others if given too many privileges ;-)
18. Intro / Concepts
Why bother?
Partial clears
Partition creation / refresh
Security
Backups
ASO aggregation (materialize aggregation)
Any other ideas for which you would otherwise need RDP access to run MAXL?
Going in and out of archive mode
Security?
Can we pass parameters from Smartview? Maybe we could do this in Calc Manager?
Subvars
Watch out! Be careful who can run MAXL.
19. Our sample use case: FX with ASO
"Disclaimers"
There is never only one way to get something done, and often no "right way". The right way often depends on, and can vary with, the requirements. It also depends on the skills of the resources available and the environment you're in.
Can't exceed 2^64 intersections.
11.1.2.4.00x brings improvements in skipping over non-existent data.
Hybrid will do everything, but not in current EPM releases. It will be Essbase Cloud or 12c.
20. Full circle back to BSO?
ASO vs. BSO vs. Hybrid. If:
• ASO has a ceiling on the number of intersections
• Hybrid has limitations – e.g. no cross-dims to upper members
• BSO is a great calc engine, and we can use parameterized partition scripts to move data from BSO to ASO
...then maybe we should still count on BSO for calculations? At least for now.
21. The How
Calc Manager CDFs – these handy little Java-based utilities written by the Hyperion Calc Manager team allow us to, among many other things, launch a MAXL command from an Essbase calc script / Calc Manager rule.
How can we get them installed?
They come with the Essbase and Planning install.
If you need to confirm they are installed, go to the bottom left of the EAS calc script editor (see screenshot on the next page).
@ functions versus "RUNJAVA"
Can you write your own?
23. @ functions versus "RUNJAVA"
RunJava allows you to make a one-time call to a CDF.
Functions beginning with @ will call the CDF on every iteration through the loop.
• There may be some cases where you would want to do this, but it is way overkill otherwise. For example, suppose you want to check a variance, find out what the variance threshold is for that POV by querying Essbase or a relational database, and finally fire off an email if the variance exceeds the threshold. In that case you would want to do it inside a loop, so the CDF is called for multiple variances (i.e. for each product / customer).
• In theory, if you don't want to call a CDF on every iteration of the loop you're in, you could FIX down on one cell so that the CDF is called only once. But that seems like a lot of trouble when you could just use "RunJava" instead.
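Putting the two options side by side: a RunJava call sits once in the calc script, outside any member block, so the CDF fires a single time. The sketch below uses the Calc Manager MaxLFunctions CDF class; the exact argument list (and whether credentials can be passed encrypted) varies by Calc Manager release, so every argument shown is a placeholder to verify against your version's documentation, not a working signature.

```
/* TARGETED calculation on the incoming entities */
FIX ("France", "Switzerland", "China")
    CALC DIM ("Accounts");
ENDFIX

/* Runs ONCE, after the calculation, rather than per block.
   All arguments below are placeholders - check the parameter
   list for your Calc Manager release before using this. */
RUNJAVA com.hyperion.calcmgr.common.cdf.MaxLFunctions
    "server" "admin" "password"
    "/scripts/push_to_aso.mxl";
```

Had this been written as an @ function inside a member formula, the MAXL script would be launched once per evaluated cell, which is exactly the overkill case described above.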
24. Can you write your own CDF?
Short answer – yes.
Longer answer – this is a big topic for another lab / webinar, and we unfortunately don't have enough time for it in this session. However, we are happy to set up another webinar to go over it after KScope: bash@epmclarity.com. Or come visit us at the Konvergence booth #111.
25. Calc Manager
Why not do it in Calc Manager?
Many users don't want to leave Excel (Smartview).
What if we want to write calc scripts in EAS?
Developing scripts in Calc Manager is still not the most enjoyable experience.
Can Calc Manager run native Essbase calcs?
Anyone have more reasons not to use Calc Manager?
You (pretty sure) need Planning / HFM to get Calc Manager.
Advantages of Calc Manager?
Pass parameters?
Enter Shuttle, which can help you manage parameterized calc scripts.
26. Anatomy of an ASO procedural calc
The dynamic member formula that does all the work – budUSDx.
The budUSD member, which will store the results of the member formula. We do this so that the result is stored and can therefore aggregate more quickly.
The copy from budUSDx to budUSD is done with the MAXL on the following slide.
31. If time permits, add another currency
1. Add another currency rate (stored in Measures >> Metrics; see the MDX screenshot from earlier). Don't forget to input the rate!
2. Create and flag another dealer with your new currency.
3. Add another IF statement to the budUSDx member formula to handle your new currency.
4. Input money for your new FX dealer.
5. Run the MAXL from Smartview to do the conversion.
6. Validate that the result is correct and has landed in budUSD.
32. Encore?
Come see my session tomorrow about the FDM -> FDMEE migration utility. Once you are done using the utility to migrate an FDM Classic app into FDMEE, you can use Eclipse to write some Jython code and then integrate that script into FDMEE.
33. Additional Webinars / Classes – on request
1. FDMEE Smart Replace for HFM – While HFM clears the entities in your current load, it will not clear entities that were loaded previously. This can happen if someone books some transactions incorrectly and they flow through to HFM. SmartReplace will catch and clear ALL entities, not just the current load, and by doing so save customers many hours of digging to find out why their data doesn't tie.
2. How to use FDMEE data sync.
3. How to write FDMEE custom reports using BI Publisher's MS Word plug-in.
4. How to use ODI to load EPMA interface tables.
5. How to write a Linux shell script (Exalytics-compatible) to automate:
a) LCM backups
b) file-system backups
c) MAXL parallel data exports (for backups and/or defrag)
d) backup of Essbase partitions whose backup is not supported by LCM
34. Reminders
Bernard Ash - EPMClarity.com
EPM Gurus – 15+ years and counting
123Olap.com – premier Hyperion training provider:
Rudy Zucca, Eric Eriksen, Matthias Wohlen
Experienced and renowned consultants, NOT just trainers.
Konvergence.com
Come find out about the “Save Essbase!” revolution at booth #111.