Site/Location Hub is an MDM solution for mastering site/location data in an enterprise. It facilitates the management of an enterprise-wide (single) view of locations and their associated information, with key MDM features such as data quality and extensibility built in.
This session opens with a brief introduction to Fusion Applications and then discusses some of the highlights of the Fusion MDM for Customer application.
Oracle has recently launched a new MDM hub for tackling the site domain. Many organizations in industries such as Retail, Utilities, and Financial Services struggle to manage site information in their business context, along with the many attributes they need to maintain at the site level. Oracle addresses this need with its Site Hub product.
This presentation covers a few of the use cases for Site Hub and discusses the features of the Site Hub product.
White Paper - Data Warehouse Documentation Roadmap - David Walker
All projects need documentation and many companies provide templates as part of a methodology. This document describes the templates, tools and source documents used by Data Management & Warehousing. It serves two purposes:
• For projects using other methodologies or creating their own set of documents to use as a checklist. This allows the project to ensure that the documentation covers the essential areas for describing the data warehouse.
• To demonstrate our approach to our clients by describing the templates and deliverables that are produced.
Documentation, methodologies and templates are inherently both incomplete and flexible. Projects may wish to add, change, remove or ignore any part of any document. Some may also believe that aspects of one document would sit better in another. If this is the case then users of this document and these templates are encouraged to change them to fit their needs.
Data Management & Warehousing believes that the approach or methodology for building a data warehouse should be a series of guides and checklists. This ensures that small teams of relatively skilled resources developing the system can cover all aspects of the project whilst remaining free to deal with the specific issues of their environment and deliver exceptional solutions, rather than following a rigid methodology that ensures large teams of relatively unskilled staff can meet a minimum standard.
Project Controls Expo 18th Nov 2014 - Introduction and keynote presentation ... - Project Controls Expo
I believe that project delivery challenges are driving organisations towards greater integration and changing the way we should approach Project Controls within project delivery organisations.
This presentation captures my thoughts on a way forward for Project Controls within a Project Delivery organisation discussing:
• Scope definition and Work Breakdown techniques driving more effective data integration (single sources of key data from numerous functions)
• Improvements in systems and system integration
• Use of Data Warehouses and Management Information systems
• Utilisation of BIM incorporating 4th (Time) and 5th (Cost) dimensions
• Taking a simpler, higher-level approach to creation of a control model
• Use of an Earned Value Management system and EV Maturity Compass
• The need for a wider range of skills to deliver good control
• An integrated industry approach to career definition, development, training and qualifications for Project Controls within the Project Management umbrella
My experience tells me that there is no single ‘magic ingredient’ to delivering successful projects; but greater integration of management and team activities, intelligent application of recognised best practices and fit-for-purpose processes, tools and systems would benefit UK construction as a whole in the future.
Trending use cases have pointed out the complementary nature of Hadoop and existing data management systems—emphasizing the importance of leveraging SQL, engineering, and operational skills, as well as incorporating novel uses of MapReduce to improve distributed analytic processing. Many vendors have provided interfaces between SQL systems and Hadoop but have not been able to semantically integrate these technologies while Hive, Pig and SQL processing islands proliferate. This session will discuss how Teradata is working with Hortonworks to optimize the use of Hadoop within the Teradata Analytical Ecosystem to ingest, store, and refine new data types, as well as exciting new developments to bridge the gap between Hadoop and SQL to unlock deeper insights from data in Hadoop. The use of Teradata Aster as a tightly integrated SQL-MapReduce® Discovery Platform for Hadoop environments will also be discussed.
Agile BI Development Through Automation - Manta Tools
How can code life cycle automation satisfy the growing demands in modern enterprise business intelligence?
Whilst an agile approach to BI development is useful for delivering value in general, the use of advanced automation techniques can also save significant resources, prevent production errors, and shorten time to market.
Gentlemen from Data To Value, Manta Tools, Volkswagen and M&G Investments presented and discussed different approaches to agile BI development. Take a look!
Agile Data Warehouse Design for Big Data Presentation - Vishal Kumar
Synopsis:
[Video link: http://www.youtube.com/watch?v=ZNrTxSU5IQ0 ]
Jim Stagnitto and John DiPietro of consulting firm a2c will discuss Agile Data Warehouse Design - a step-by-step method for data warehousing / business intelligence (DW/BI) professionals to better collect and translate business intelligence requirements into successful dimensional data warehouse designs.
The method utilizes BEAM✲ (Business Event Analysis and Modeling) - an agile approach to dimensional data modeling that can be used throughout analysis and design to improve productivity and communication between DW designers and BI stakeholders. BEAM✲ builds upon the body of mature "best practice" dimensional DW design techniques, and collects "just enough" non-technical business process information from BI stakeholders to allow the modeler to slot their business needs directly and simply into proven DW design patterns.
BEAM✲ encourages DW/BI designers to move away from the keyboard and their entity relationship modeling tools and begin "white board" modeling interactively with BI stakeholders. With the right guidance, BI stakeholders can and should model their own BI data requirements, so that they can fully understand and govern what they will be able to report on and analyze.
The BEAM✲ method is fully described in Agile Data Warehouse Design, a text co-written by Lawrence Corr and Jim Stagnitto.
About the speakers:
Jim Stagnitto, Director of a2c Data Services Practice
Data Warehouse Architect: specializing in powerful designs that extract the maximum business benefit from Intelligence and Insight investments.
Master Data Management (MDM) and Customer Data Integration (CDI) strategist and architect.
Data Warehousing, Data Quality, and Data Integration thought-leader: co-author with Lawrence Corr of "Agile Data Warehouse Design", guest author of Ralph Kimball’s “Data Warehouse Designer” column, and contributing author to Ralph and Joe Caserta's latest book: “The DW ETL Toolkit”.
John DiPietro, Chief Technology Officer at A2C IT Consulting
John DiPietro is the Chief Technology Officer for a2c. Mr. DiPietro is responsible for setting the vision, strategy, delivery, and methodologies for a2c’s Solution Practice Offerings for all national accounts. The a2c CTO brings with him an expansive depth and breadth of specialized skills in his field.
Sponsor Note:
Thanks to:
Microsoft NERD for providing an awesome venue for the event.
http://A2C.com IT Consulting for providing the food/drinks.
http://Cognizeus.com for providing a book to give away as a raffle prize.
Master Data Management (MDM) has been one of the hot technology areas striving to solve the age-old data quality and data management problems of master data such as Customer, Product, and Chart of Accounts (COA). Of late, given the ever-increasing capabilities of hardware, global single instances of packaged applications, and mergers and acquisitions, it has become apparent that the data quality problems associated with master data continue to worsen. It is in this context that MDM solutions try to address the management of master data with robust data quality capabilities. The Trading Community Architecture (TCA) framework is Oracle's answer to the problem of managing customer data, and it has since evolved to cover location data, supplier data, citizen data, and more. The objective of this session is to provide an overview of Master Data Management (MDM) and Oracle's Trading Community Architecture (TCA) and to show how TCA can be used to model the customer data in an enterprise. This is an entry-level session, and anyone with a keen interest in learning what MDM and TCA are can attend. Learn the basics of Master Data Management (MDM), MDM for Customer, and Oracle's Trading Community Architecture (TCA). Learn about the importance of MDM to an enterprise. Take a brief look at TCA's logical data model and the power and flexibility of the model for solving customer data problems.
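As a rough illustration of the kind of data quality problem MDM hubs address (duplicate customer records arriving from multiple systems), here is a minimal Python sketch. It is not Oracle TCA or Customer Hub code; the record fields, values and similarity threshold are all invented for the example.

```python
# Minimal sketch (not Oracle TCA/MDM code): duplicate detection and merge
# ("survivorship") for customer master records. Fields, records and the
# similarity threshold are invented for illustration.
from difflib import SequenceMatcher

customers = [
    {"id": 1, "name": "Acme Corp", "city": "Boston", "source": "ERP"},
    {"id": 2, "name": "ACME Corporation", "city": "Boston", "source": "CRM"},
    {"id": 3, "name": "Globex Inc", "city": "Chicago", "source": "CRM"},
]

def similarity(a, b):
    """Crude name similarity in [0, 1]."""
    return SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()

def find_duplicates(records, threshold=0.7):
    """Return candidate duplicate pairs whose names score above the threshold."""
    pairs = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            if a["city"] == b["city"] and similarity(a, b) >= threshold:
                pairs.append((a["id"], b["id"]))
    return pairs

def merge(a, b):
    """Survivorship rule: keep the longer (assumed richer) name, track both sources."""
    best = a if len(a["name"]) >= len(b["name"]) else b
    return {**best, "sources": sorted({a["source"], b["source"]})}

print(find_duplicates(customers))         # [(1, 2)] with these toy records
print(merge(customers[0], customers[1]))  # one surviving "golden" record
```

A real MDM hub applies far richer match rules and survivorship policies, but the shape of the problem is the same: find likely duplicates, then build a single golden record.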
Data Works Berlin 2018 - Worldpay - PCI Compliance - David Walker
A presentation from the Data Works conference in 2018 that looks at how Worldpay, a major payments provider, deployed a secure Hadoop cluster to meet business requirements and, in the process, built one of the few fully certified PCI-compliant clusters in the world.
Analytic Platforms in the Real World with 451 Research and Calpont, July 2012 - Calpont Corporation
Matt Aslett, 451 Research, and Bob Wilkinson, VP Engineering for Calpont, discuss the emergence of the analytic platform, its place in the new ecosystem for Big Data, considerations for selection, and applied use cases of Calpont’s analytic platform, InfiniDB, in Telco and Mobile Advertising.
Explores the notion of "Hadoop as a Data Refinery" within an organisation, be it one with an existing Business Intelligence system or none - and looks at 'agile data' as a benefit of using Hadoop as the store for historical, unstructured and very-large-scale datasets.
The final slides look at the challenge of an organisation becoming "data driven"
Hadoop as Data Refinery - Steve Loughran - JAX London
Apache Hadoop is often described as a "Big Data Platform" but what does that mean? One way to better understand Hadoop is to talk about how Hadoop is used. This talk discusses using Hadoop as a "Data Refinery", which is a common use case. The concept is very much like a traditional oil refinery except with data, pulling in large quantities of "crude data" over pipelines, refining some into useful business intelligence; refining other pieces into slightly less crude data that stays in the cluster until needed later. This metaphor proves useful when considering how Hadoop could be adopted in an organisation that already has data warehousing and business intelligence systems - and when contemplating how to hook up a Hadoop cluster to the sources of data inside and outside that organisation. A key point to remember is that storing data in Hadoop is no more an end in itself than storing data in a database is: the goal is extracting information from that data. Using Hadoop as a front end "data refinery" means that it can integrate with existing Business Intelligence systems, while providing the platform for new applications.
GDPR Noncompliance: Avoid the Risk with Data Virtualization - Denodo
You can watch the full webinar on-demand here: https://goo.gl/2f2RYF
In its recent report “Predictions 2018: A year of reckoning”, Forrester predicts that 80% of firms affected by GDPR will not comply with the regulation by May 2018. Of those noncompliant firms, 50% will intentionally not comply.
Compliance doesn’t have to be this difficult! What if you have an opportunity to facilitate GDPR compliance with a mature technology and significant cost reduction? Data virtualization is a mature, cost-effective technology that enables privacy by design to facilitate GDPR compliance.
Attend this session to learn:
• How data virtualization provides a GDPR compliance foundation with data catalog, auditing, and data security.
• How you can enable a single enterprise-wide data access layer with guardrails.
• Why data virtualization is a must-have capability for compliance use cases.
• How Denodo’s customers have facilitated compliance.
Education Seminar: Self-service BI, Logical Data Warehouse and Data Lakes - Denodo
This educational seminar took place on Thursday, December 8th in Westin Galleria Dallas, Texas.
Self-service BI, Logical Data Warehouse and Data Lakes – They are all essential components of Fast Data Strategy. Many companies are rapidly augmenting their traditional data warehouses, data marts, and ETL with their logical counterparts. Reason? Agility and rapid time-to-market.
Speakers included:
• Chuck DeVries, VP, Strategic Technology and Enterprise Architecture, Vizient
• Ravi Shankar, Chief Marketing Officer, Denodo
• Charles Yorek, Vice President, iOLAP
Advanced Analytics and Machine Learning with Data Virtualization - Denodo
Watch full webinar here: https://bit.ly/3aXysas
Advanced data science techniques, like machine learning, have proven to be extremely useful for deriving valuable insights from your data. Data science platforms have become more approachable and user-friendly. Yet with all the advancements in the technology space, the data scientist still spends most of their time massaging and manipulating the data into a usable data asset. How can we empower the data scientist? How can we make data more accessible, and foster a data sharing culture?
Join us, and we will show you how Data Virtualization can do just that, with an agile and AI/ML laced data management platform. It can empower your organization, foster a data sharing culture, and simplify the life of the data scientist.
Watch this webinar to learn:
- How data virtualization simplifies the life of the data scientist, by overcoming data access and manipulation hurdles.
- How the integrated Denodo Data Science notebook provides a unified environment
- How Denodo uses AI/ML internally to drive the value of the data and expose insights
- How customers have used Data Virtualization in their Data Science initiatives.
A Strategic View of Enterprise Reporting and Analytics: The Data Funnel - Inside Analysis
The Briefing Room with Colin White and Jaspersoft
Slides from the Live Webcast on June 12, 2012
As the corporate appetite for analytics and reporting grows, companies must find a way to secure a strategic view of their information architecture. End users with varying degrees of expertise need a wide range of data and reports delivered in a timely fashion. As the audience for analytics expands, that puts pressure on IT infrastructure and staff. And now with the promise of Hadoop and MapReduce, the organization's desire for business insight becomes even more significant.
In this episode of The Briefing Room, veteran Analyst Colin White of BI Research will explain the value of being strategic with enterprise reporting. White will be briefed by Karl Van den Bergh of Jaspersoft, who will tout his company's “data funnel” concept, which is designed to strategically manage an organization's information architecture. By aligning information assets along this funnel, IT can effectively address the spectrum of analytical needs – from simple reporting to complex, ad hoc analysis – without over-taxing personnel and system resources.
Architecting Agile Data Applications for Scale - Databricks
Data analytics and reporting platforms historically have been rigid, monolithic, hard to change, and have limited ability to scale up or scale down. I can’t tell you how many times I have heard a business user ask for something as simple as an additional column in a report, and IT says it will take six months to add that column because it doesn’t exist in the data warehouse. As a former DBA, I can tell you the countless hours I have spent “tuning” SQL queries to hit pre-established SLAs. This talk will cover how to architect modern data and analytics platforms in the cloud to support agility and scalability. We will include topics like end-to-end data pipeline flow, data mesh and data catalogs, live data and streaming, performing advanced analytics, applying agile software development practices like CI/CD and testability to data applications, and finally taking advantage of the cloud for infinite scalability both up and down.
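As a small illustration of the testability point above: keeping a pipeline transformation as a pure function makes it straightforward to unit test in CI/CD, independent of any warehouse or cluster. This sketch is not Databricks code; the field names and exchange rates are invented.

```python
# Illustrative sketch only: a pure transformation function plus a unit test.
def enrich_orders(orders, fx_rates):
    """Add a USD amount to each order; rows with unknown currencies are skipped."""
    enriched = []
    for row in orders:
        rate = fx_rates.get(row["currency"])
        if rate is None:
            continue  # a real pipeline would route this row to a reject table
        enriched.append({**row, "amount_usd": round(row["amount"] * rate, 2)})
    return enriched

def test_enrich_orders_converts_and_filters():
    orders = [
        {"order_id": 1, "amount": 100.0, "currency": "EUR"},
        {"order_id": 2, "amount": 50.0, "currency": "XXX"},  # unknown currency
    ]
    result = enrich_orders(orders, {"EUR": 1.1})
    assert [r["order_id"] for r in result] == [1]
    assert result[0]["amount_usd"] == 110.0
```

Run with pytest; because the logic has no warehouse dependency, the same test runs locally and in the CI pipeline.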
Optimizing IT Costs & Services With Big Data (Little Effort!) - Case Studies ... - TeamQuest Corporation
IT organizations have a wealth of Service Management and Service Delivery tools, processes and metrics that typically exist in relative isolation. This session will present detailed real-life examples of how existing tools and metrics can be brought together using big data techniques to optimize costs and performance of IT environments.
Basic information about PowerDesigner's strategic roadmap within SAP, presented at the PowerDesigner event held by Sybase Türkiye on 14 June 2012.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Generating a custom Ruby SDK for your web service or Rails API using Smithy - g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
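As a hedged illustration of what can be done once JMeter metrics land in InfluxDB, the sketch below pulls average response times with the Python influxdb client (InfluxDB 1.x). The measurement, field and tag names are assumed to follow JMeter's default InfluxDB Backend Listener schema and may need adjusting to your configuration; this is not taken from the webinar itself.

```python
# Minimal sketch: query JMeter metrics stored in InfluxDB 1.x from Python.
# Assumes the default Backend Listener schema (measurement "jmeter",
# field "avg", tag "transaction") - adjust names to match your setup.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="jmeter")

# Average response time per transaction over the last 30 minutes.
result = client.query(
    'SELECT MEAN("avg") FROM "jmeter" '
    'WHERE time > now() - 30m GROUP BY "transaction"'
)
for (measurement, tags), points in result.items():
    for point in points:
        print(tags.get("transaction"), point["mean"])
```

Grafana remains the natural front end for these series; a script like this is only useful for ad hoc checks or exporting numbers into reports.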
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I was wondering, as an “infrastructure container Kubernetes guy”, how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for, or limiting to, your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I already have working for real.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
• See how to accelerate model training and optimize model performance with active learning
• Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
• Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
2. AGENDA
• Introduction to DWH
– Are DWH projects complex? Challenges, requirements
• Information Architecture (EA) for DWH
– Models, Workflows, Artifacts
• Mapping over Sybase PowerDesigner
– Mapping to IA models and artifacts
– Features of interest for DWH
• Demonstration
– One example of IA architecture on the project
• DWH Referent IA
– Layers and recommendations
3. COMPLEXITY OF DWH PROJECTS
ARE DWH PROJECTS COMPLEX?
5. BI/DWH COMPLEXITY
Causes, sources
– Data Sources
• Different sources, technologies, business functions, legacy, overlapping, concepts, elements
– Scope and Performances
• Never enough, never on time, content variations(!)
– Participants
• Different backgrounds, knowledge, skills, motivation, visions
– Requirements
• Continuous changes and extensions
– Growth and Development
• Volume of data, people, reports and analysis
– Quality
• Clean, right, correct on time, cleansing (division of resp.)
6. BI/DWH COMPLEXITY
Change, Heterogeneity
– Never enough
• Users/BI analysts cannot give a definite, detailed, complete and precise specification of all reports/views in advance (!)
• Changes come at the end; they are inevitable and continual
– Never on time
• Every change should be implemented and used within a usable time frame, before the user forgets about it
– Never one and exactly one data source
• Different sources result in different DMS technology, different refresh rates, different volumes, different performance (management)...
• Data overlapping – consolidation
7. BI/DWH COMPLEXITY
Volume and growth, data quality
– Large Volume and intensive Growth are inevitable
• Operational Data Sources
– Keep only the data set needed for operational work (a year?)
– Keep it in the shape suitable for operational work (Relational)
• Analytical Extension
– Keep the data needed for sound analysis (many years?)
– Keep it in the shape suitable for analysis (MDM)
• Growth is inevitable in both time and volume
• Compromises:
– Time: keep the last (x) months or a representative sample
– Nobody is actually happy, neither IT nor Business
– Data Quality
• A clean, consolidated data source does not exist
• Every data source needs constant “housekeeping”: Data Entry/ETL
8. BI/DWH COMPLEXITY
Performance
– Never fast enough
• Use of technology not designed/suited for analytics (RDBMS)
• Intensive use of “ad hoc” requests – indexing problem (RDBMS)
• Free exploration over an arbitrary data set is heavily limited
• Very intensive and heavy administration – a never-ending story
– One and only one complete “Version of the truth”
• Similar or overlapping analyses present different data!!
– Which one is correct and right? What about the rest of it?
• You are the publisher – hold the responsibilities
– Hold the reader's trust
– Publish on a regular basis
– Use a variety of sources and edit them with quality and consistency
• Data consistency must be established and protected
11. INFORMATION ARCHITECTURE FOR DWH
Position of IA
– Motivation, goals, business principles, organizational structure, Business Functions, Services and Processes
– IT support for Business Architecture: System Services, Applications, Databases, Components, Forms, Reports, Data Flows...
– Technology for IS Architecture: network, servers, installed instances, access points
– From current to planned
12. INFORMATION ARCHITECTURE FOR DWH
Method (ADM)
– To define an Architecture Development Method
• Define at any point of the project who is doing what, how and when
• Define Phases, Workflows, Artifacts, Models, and Deliverables
• Essential for DWH/BI with a backward requirement process
– The presented ADM and IA are:
• Agile – a simplification of RUP, TOGAF
• Iterative and Incremental – cyclic repetition of workflows
• Data Driven – based on Data Assets
• Comprehensive – includes all activities, including maintenance and RFCs
• Model Driven – all artifacts are represented with modeling artifacts
• Requirement Driven – placed in the center of the methodology
• Sustainable – at any point knowledge is collected, formally specified and properly presented
13. INFORMATION ARCHITECTURE FOR DWH
Main ADM Cycle
– Requirements & Constraints in the center
– Not all are mandatory
– The main cycle is presented; others are possible
– Many cycles are expected
– All that is needed to obtain a sustainable system:
• Development
• Deployment
• Maintenance
– Active, in IME
14. INFORMATION ARCHITECTURE FOR DWH
B. Data Analysis
– Objective:
• Discover, identify, collect, elaborate, specify, define and present Data Assets
• Different abstraction levels: from conceptual to implementation
– Viewpoints:
• Architectural viewpoint: Data Providers and Consumers, data flow process, engaged systems, applications, components, usage, access rights
• Structural viewpoint: structure, attributes, relationships, dependencies, rules applied on conceptual and physical level
– Inputs:
• IA (others not in the scope)
– Outputs:
• DA, Sources Conceptual and Physical Data
15. INFORMATION ARCHITECTURE FOR DWH
C. DWH Design and Implementation
– Objective:
• Design and implementation of an integrated and unified data collection, organized in the dimension of time, which is subject-oriented and used for analysis, planning and evaluation of business performance
• Establish a common view (unified/integrated/complete) over the enterprise data, a stable source of historical information, and accommodate data growth
– Activities:
• Full and detailed schema specification for DWH, Staging and ODS
– Inputs:
• IA & DA, Sources Conceptual/Physical Data
– Outputs:
• IA & DA, Conceptual and Physical DWH
16. INFORMATION ARCHITECTURE FOR DWH
D. ETL Design and Implementation
– Objective:
• To analyze, elaborate, define, specify, present and implement full and incremental ETL flows between source2staging, staging2DWH, DWH2MDM
• To discover, identify, collect, elaborate, specify, define and present all characteristics of ETL flows
– Activities:
• Extraction Method, Schema, Condition and Frequency for increments
• Source 2 Target Mapping
• Transformation processes on the appropriate level of detail
• Data Flow Architecture, Trash management
– Inputs:
• IA & DA, Sources Conceptual/Physical Data and DWH
– Outputs:
• DA, Data Flow, Sources Conceptual/Physical Data
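To make the incremental extraction, source-to-target mapping and trash management ideas on this slide concrete, here is a small illustrative Python sketch. It is not generated from any model or tool, and every table, column and watermark value is invented.

```python
# Illustrative ETL sketch: incremental extract, declarative source-to-target
# mapping, and a trash bucket for rows that fail transformation.
SOURCE2TARGET = {               # source column -> DWH column
    "cust_no": "customer_id",
    "cust_nm": "customer_name",
    "ord_amt": "order_amount",
}

def extract(source_rows, last_loaded_ts):
    """Incremental extraction: only rows changed since the last load."""
    return [r for r in source_rows if r["updated_at"] > last_loaded_ts]

def transform(staged_rows):
    """Rename columns per the mapping; route unparsable rows to 'trash'."""
    loaded, trash = [], []
    for row in staged_rows:
        try:
            target = {SOURCE2TARGET[k]: v for k, v in row.items() if k in SOURCE2TARGET}
            target["order_amount"] = float(target["order_amount"])
            loaded.append(target)
        except (KeyError, ValueError):
            trash.append(row)
    return loaded, trash

source = [
    {"cust_no": 7, "cust_nm": "Acme", "ord_amt": "120.50", "updated_at": 1005},
    {"cust_no": 8, "cust_nm": "Globex", "ord_amt": "n/a", "updated_at": 1010},
    {"cust_no": 9, "cust_nm": "Initech", "ord_amt": "80.00", "updated_at": 900},
]
staged = extract(source, last_loaded_ts=1000)
dwh_rows, trash_rows = transform(staged)
print(dwh_rows)    # customer 7 loaded
print(trash_rows)  # customer 8 routed to trash ("n/a" amount)
```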
17. INFORMATION ARCHITECTURE FOR DWH
E. BI Design and Implementation
– Objective:
• Establish Multidimensional space (Business Universe) with Facts, Measures, Dimensions and Hierarchies
• Build visualization including Reports, Dashboards, OLAP views
• Check if the requested KPI set is supported and presented
– Activities:
• MDM Space (above)
• Detailed specification of requested KPIs with mapping
• Detailed specification of Reports, Dashboards and OLAP Views
• Access rights and delivery mechanisms
– Inputs:
• DWH, IA and DA
– Outputs:
• IA, DA, DWH (MDM) models
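To make the multidimensional vocabulary on this slide (facts, measures, dimensions, hierarchies, KPIs) concrete, here is a small illustrative Python sketch. It is not a BI tool configuration; all dimension members, measures and the KPI are invented.

```python
# Illustrative sketch: a fact table, a dimension with a hierarchy, and a KPI
# computed by rolling facts up one hierarchy level (store -> country).
from collections import defaultdict

STORE_DIM = {
    "S1": {"city": "Istanbul", "country": "TR"},
    "S2": {"city": "Ankara", "country": "TR"},
    "S3": {"city": "Berlin", "country": "DE"},
}

SALES_FACT = [
    {"store": "S1", "revenue": 1200.0, "units": 10},
    {"store": "S2", "revenue": 800.0, "units": 8},
    {"store": "S3", "revenue": 950.0, "units": 5},
]

def rollup(facts, dim, level, measure):
    """Aggregate a measure up the hierarchy to the requested level."""
    totals = defaultdict(float)
    for row in facts:
        totals[dim[row["store"]][level]] += row[measure]
    return dict(totals)

# KPI: revenue per unit sold, by country.
revenue = rollup(SALES_FACT, STORE_DIM, "country", "revenue")
units = rollup(SALES_FACT, STORE_DIM, "country", "units")
kpi = {country: revenue[country] / units[country] for country in revenue}
print(kpi)  # {'TR': ~111.1, 'DE': 190.0}
```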
18. INFORMATION ARCHITECTURE FOR DWH
Paths – Simplified Analysis, Design and Implementation
– Generally many paths are possible
– Gap Analysis may discover missing info
– Analysis, Design and Implementation of DWH, ETL and BI are tightly interconnected and dependent on each other
19. INFORMATION ARCHITECTURE FOR DWH
Models – Business & Information Architecture
– Specifies
• Application systems and applications
• Data Assets, Databases, Data Source/Destination, Data Providers/Consumers
• Usage of Data Assets and Applications, cooperation and collaboration of Applications and/or services
• Ownership over the Data Assets, Applications and Services, elements of SLA
• ETL procedures on a high abstraction level
– Viewpoints
• Architecture of the system
– Represents a “hat” for the rest of the system
20. INFORMATION ARCHITECTURE FOR DWH
Models – Source Model
– Specifies
• Details to understand structural relationships and meaning (conceptual)
• Internal structure of the data source with implementation details (physical): tables, columns, views, keys, procedures, indexes, rights, constraints, triggers
• Consolidated by bidirectional synchronization and associated transformation
• Extraction scheme for every data source
• Source for Source2Target mapping
– Viewpoint
• One or more diagrams per source to represent a subject area
– Used to synchronize changes from the source into the DWH IA
• Starting point for change management
21. INFORMATION ARCHITECTURE FOR DWH
Models – DWH model
– Specifies
• Details to understand structural relationships and meaning (conceptual)
• Internal structure of the DWH with implementation details (physical)
• Internal MDM structure of Data Marts (physical)
• For Staging, Trash, ODS, DWH and MDM
• Target for Source2Target mapping
• Relational2MDM mapping
– Viewpoints
• One or more diagrams to represent a subject area
• One or more MDM diagrams to represent Data Marts
– Used to synchronize changes from the DWH IA to the actual RDBMS
• Ending point for change management
22. INFORMATION ARCHITECTURE FOR DWH
Models – Data Flow model
– Specifies
• Connects all important data sources to destinations
• Data flows from source to destination with all attributes and constraints: characteristics of flow processes, source and destination tables, kind of flow (ETL, replication or federation), integration preconditions, used integration service, possible outcomes, etc.
• Mapping source2target within every step of the flow
– Viewpoints
• One or more diagrams to represent the actual flow task: aggregation, sort, filter, projection, split, join, merge, lookup
• One or more diagrams to represent transformation control flow
– Used to present integration, consolidation and migration
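The data flow steps described above (filter, lookup, aggregate, with the source-to-target mapping applied inside the steps) can be sketched as a simple chain of Python functions. This is purely illustrative, not PowerDesigner or integration-service output; all step names, columns and lookup values are invented.

```python
# Illustrative data flow: a chain of named steps from source rows to destination rows.
from collections import defaultdict
from functools import reduce

COUNTRY_LOOKUP = {"TR": "Turkey", "DE": "Germany"}

def filter_valid(rows):
    return [r for r in rows if r["amount"] is not None]

def lookup_country(rows):
    return [{**r, "country_name": COUNTRY_LOOKUP.get(r["country"], "Unknown")} for r in rows]

def aggregate_by_country(rows):
    totals = defaultdict(float)
    for r in rows:
        totals[r["country_name"]] += r["amount"]
    return [{"country_name": k, "total_amount": v} for k, v in totals.items()]

FLOW = [filter_valid, lookup_country, aggregate_by_country]  # the "control flow"

source_rows = [
    {"country": "TR", "amount": 10.0},
    {"country": "TR", "amount": None},
    {"country": "DE", "amount": 4.5},
]
destination_rows = reduce(lambda data, step: step(data), FLOW, source_rows)
print(destination_rows)
```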
23. MAPPING ON SYBASE POWERDESIGNER
DWH RELATED MODELS AND FEATURES
24. MAPPING ON SYBASE POWERDESIGNER
Data Models
25. MAPPING ON SYBASE POWERDESIGNER
Architecture and Requirements
26. MAPPING ON SYBASE POWERDESIGNER
DWH related features – Dependency Matrix
– What
• Two-dimensional hierarchical matrix
• Present, review and create/delete links of a particular kind between two artifacts: any model, any diagram, any two artifacts
• Indirect (two or more links) dependency, drilling
• Hierarchy of objects on rows/columns, copy to CSV
– Reasoning
• Full, rich, useful dependency analysis (network)
• EAM to understand and present dependency between Data Assets and Data Providers and Consumers
• PDM to create a mapping overview
• DMM to present actual source to target dependencies
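As a toy illustration of the dependency-matrix idea (rows as source columns, columns as DWH columns, a mark wherever a mapping links the two), here is a short Python sketch. The mappings listed are invented, and this is not how PowerDesigner stores or renders them.

```python
# Toy dependency matrix built from a list of source-to-target mappings.
MAPPINGS = [
    ("crm.customer.cust_no", "dwh.dim_customer.customer_id"),
    ("crm.customer.cust_nm", "dwh.dim_customer.customer_name"),
    ("erp.order.ord_amt",    "dwh.fact_sales.order_amount"),
]

sources = sorted({s for s, _ in MAPPINGS})
targets = sorted({t for _, t in MAPPINGS})
linked = set(MAPPINGS)

print("columns:", targets)
for s in sources:
    row = ["X" if (s, t) in linked else "." for t in targets]
    print(f"{s:<25} {' '.join(row)}")
```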
27. MAPPING ON SYBASE POWERDESIGNER
DWH related features – Mappings
– What
• Modeling the connection between objects: mapping with transformation (O/R, R/R, O/O)
• Generation (Generate Mappings)
• Wizard to convert mappings into a Transformation Task (ILM)
• Mapping Editor
– Reasoning
• Data Flow specification
• Relational to Multidimensional – DWH2MDM
• Relational to Relational – Source2Target
• Any descriptive dependency – federation concept (not Replication or ETL)
28. MAPPING ON SYBASE POWERDESIGNER
DWH related features – Impact/Lineage Analysis
– What
• Impact Analysis – consequences of a change
• Lineage Analysis – objects forming the basis for an object
• Temporary View for review
• IAM for a permanent view, snapshot (drilling, exploring)
• Analysis Rules are changeable (Impact/Lineage)
– Reasoning
• Change Management evaluation, estimation and planning – to assess change impact before it happens (costs, time, resources)
• Snapshots – development points, different versions of the system
• Meta Data BI – to explore the meta-data set, discover implicit dependencies
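The difference between impact analysis (everything downstream of a change) and lineage analysis (everything upstream that an object is derived from) can be shown with a small dependency graph in Python. The object names are invented, and the traversal is only a conceptual sketch of what the tool automates from model links.

```python
# Impact vs. lineage over a toy dependency graph.
DEPENDS_ON = {   # target object -> objects it is built from
    "staging.customer": ["crm.customer"],
    "staging.order":    ["erp.order"],
    "dwh.dim_customer": ["staging.customer"],
    "dwh.fact_sales":   ["staging.order"],
    "mart.sales_cube":  ["dwh.dim_customer", "dwh.fact_sales"],
}

def lineage(obj):
    """All upstream objects the given object is derived from."""
    seen, stack = set(), list(DEPENDS_ON.get(obj, []))
    while stack:
        cur = stack.pop()
        if cur not in seen:
            seen.add(cur)
            stack.extend(DEPENDS_ON.get(cur, []))
    return seen

def impact(obj):
    """All downstream objects affected if the given object changes."""
    downstream = {}
    for target, sources in DEPENDS_ON.items():
        for s in sources:
            downstream.setdefault(s, []).append(target)
    seen, stack = set(), list(downstream.get(obj, []))
    while stack:
        cur = stack.pop()
        if cur not in seen:
            seen.add(cur)
            stack.extend(downstream.get(cur, []))
    return seen

print(sorted(lineage("mart.sales_cube")))  # everything the cube is built from
print(sorted(impact("crm.customer")))      # everything a source change would touch
```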
29. DEMONSTRATION
EXAMPLE
32. REFERENT INFORMATION ARCHITECTURE
Data Access Layer - recommendations
– A set of processes, tools, activities and models:
• Data extraction (E) from the operational system to the DWH
• Transformation to a suitable shape (T): data cleansing, consistency checks, integrity; translation from the operational to the enterprise format; the enterprise DWH data structure is inevitably different from the operational one
• Data loading into the DWH (L)
– Recommendations:
• Understand OS structure, rules and dynamics
• The dynamics of the data refresh rate should be realistic
• Changed Data Capture algorithm: intrusive/non-intrusive
• Apply effective ETL/ELT, use a staging area
• Document everything – changes are very intensive and complex
• Mappings between Data Sources and the DWH destination
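One of the recommendations above, a non-intrusive Changed Data Capture algorithm, can be illustrated with a minimal snapshot-comparison sketch in Python. Keys, columns and values are invented, and a real implementation would work against database extracts rather than in-memory lists.

```python
# Non-intrusive CDC sketch: diff today's extract against the previously staged snapshot.
def capture_changes(previous, current, key="id"):
    """Return (inserts, updates, deletes) between two snapshots."""
    prev = {r[key]: r for r in previous}
    curr = {r[key]: r for r in current}
    inserts = [r for k, r in curr.items() if k not in prev]
    deletes = [r for k, r in prev.items() if k not in curr]
    updates = [r for k, r in curr.items() if k in prev and r != prev[k]]
    return inserts, updates, deletes

yesterday = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
today     = [{"id": 1, "name": "Acme Corp"}, {"id": 3, "name": "Initech"}]
print(capture_changes(yesterday, today))
# inserts: id 3, updates: id 1, deletes: id 2
```

An intrusive alternative would read change timestamps, triggers or a transaction log on the source; the snapshot diff is the fallback when the source offers none of these.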
33. REFERENT INFORMATION ARCHITECTURE
DWH Layer and Recommendations
– DWH, Staging, Trash and ODS
• A common view of enterprise data, regardless of how or by whom it will be used
• Unification offers flexibility in how the data is later interpreted
• A stable source of historical information
• Efficient accommodation of a data explosion (growth)
• Supplies data to the Analytical layer at the required granularity
• Trash and Alerts & Matching to jump over initial cleansing (a blocker)
– Recommendations:
• Use pre-packaged solutions and existing experience
• Use relational and multidimensional modeling (document everything)
• Relational to address performance issues, following the OS paradigm
• Multidimensional to later present the Business Universe for Analytics
• Use views to transform and map Relational to Multidimensional and back
34. REFERENT INFORMATION ARCHITECTURE
Delivery Layer and Recommendations
– A set of processes, tools, activities and models:
• Selection of the data subset to be delivered
• Reorganization (format) of the data to be delivered: aggregation, summing, counting, additional classification; data transformation (Date, Time), slowly changing dimensions
• Transform DWH structures into the “Business Universe”: Facts, Measurements, Dimensions, Hierarchies, business-language abstraction
• Granularity according to the End User needs
– Recommendations:
• Use multidimensional modeling
• Model transformations/mappings between the DWH and Data Mart(s)
• Extend the multidimensional model with the required BI meta-data
• Define the refresh rate on the basis of user needs (constraints)
• Aggregations are difficult for incremental update
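The slowly changing dimensions point above can be illustrated with a minimal Type 2 sketch in Python, where a changed attribute closes the current dimension row and inserts a new versioned one. The column names and the tracked attribute are invented for the example.

```python
# Illustrative SCD Type 2 update on an in-memory dimension.
from datetime import date

def scd2_apply(dim_rows, incoming, key="customer_id", tracked=("city",)):
    """Apply one incoming record to a Type 2 dimension (list of dicts)."""
    today = date.today().isoformat()
    current = next((r for r in dim_rows
                    if r[key] == incoming[key] and r["valid_to"] is None), None)
    if current and all(current[c] == incoming[c] for c in tracked):
        return dim_rows                      # nothing changed
    if current:
        current["valid_to"] = today          # close the old version
    dim_rows.append({**incoming, "valid_from": today, "valid_to": None})
    return dim_rows

dim = [{"customer_id": 7, "city": "Istanbul", "valid_from": "2010-01-01", "valid_to": None}]
dim = scd2_apply(dim, {"customer_id": 7, "city": "Ankara"})
for row in dim:
    print(row)
```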
35. REFERENT INFORMATION ARCHITECTURE
MDM Layer and Recommendations
– Data marts
• DWH derivatives; they provide the business community with answers to asked questions and strategic analysis
• Tailored for a particular capability or function of the enterprise
• Vertically organized and bound to one business function
• Organized in a multidimensional structure
• OLAP – On-Line Analytical Processing – ROLAP/MOLAP
– Recommendations:
• Choose the proper storage technology (ROLAP/MOLAP): special storage may not be standard and may narrow your choices; common storage may not be performant enough
• Choose Virtual Marts as the basis of Analytics
• Use a separate Virtual Mart for each separate business concern
• Adjust to BI meta-data requirements, use automated access
36. INFORMATION ARCHITECTURE FOR DWH PROJECTS
QUESTIONS?
RUAIRI PRENDIVILLE
SENIOR CONSULTANT, SYBASE (UK)
JUNE 14TH, 2012, ISTANBUL