Dynamics Day 2014: Microsoft Dynamics AX - Business Insight Leveraging Analytics (Intergen)
A close-up look at how to leverage analytics functionality in Microsoft Dynamics AX, covering traditional BI, modifying role centres and Power BI.
Dynamics Day is Australasia's leading event for users of Microsoft Dynamics. For those of you who couldn't make it along to the event, we have made all session content available online.
What are the benefits of metadata-driven automation for big data and data warehouse solutions? Here you can find the slides from Gregor Zeiler's session at the Data Modeling Zone (#DMZone) on 25 September in Dusseldorf, Germany.
This slide deck explains in a comprehensive way what Power BI is, what the Power BI architecture looks like, and what the usage scenarios are for Power BI and related tools.
An introduction to Power BI for self-learners; this presentation also contains some useful links that help when developing with Power BI.
Slide deck from the free session on PowerBI I have given at numerous SQLSaturday and user group events. You would have to attend to get the full picture, but this should give you an idea!
There is cool functionality in Power Query, Power Pivot, Power View and Power Map, and plenty of sessions on how to use it. But how do you hook it all together so you can take advantage of both public data and on-premises data for greater data insights?
The Microsoft business intelligence front end tools are rich and varied and have also changed and grown over the years. We give an overview of the Microsoft business analytics tools, from Power BI to Excel and SQL Server and include a chart that compares their features.
Power BI - Finally I can make decisions based on facts (Ulysses Maclaren)
SSW's General Manager, Ulysses Maclaren, will take you through his personal journey of discovery into the world of Power BI.
Many companies have valuable data, but the challenge is being able to easily visualize it in a way that allows you to make decisions.
He will show in a live demo how Power BI allows even non-technical users to produce and share invaluable reports and dashboards.
Are you really curious about Power BI Dashboards, but you are scared that you won’t understand any of the words? This is the session for you! During our short time together, I will define some basic terms and best practices in Data Analysis, provide a quick demo of Power BI, and show what it takes to create a few simple reports and dashboards.
What Will You Learn?
• How to ask the right questions
• How to quickly get and model data
• How to design easy reports and dashboards
Denodo DataFest 2017: Enabling Single View of Entities with Microservices (Denodo)
Watch the live session on-demand now: https://goo.gl/n2js3M
Microservices is an advanced architecture for rapidly building applications using a suite of loosely-coupled modular services.
Watch this Denodo DataFest 2017 session to discover:
• A deeper understanding of delivering a single view of entities (such as students) as microservices, enabled by MDM and data virtualization.
• How GetSmarter are using microservices to improve their business intelligence.
• A future use case of data virtualization as a microservice.
What is BI: definition, examples, the BI industry, solutions, evolution, categories, key stages of BI, BI significance, BI technologies, tools, and the future of BI
Trends in Big Data & Business Challenges (Experian_US)
Join our #DataTalk on Thursdays at 5 p.m. ET. This week, we tweeted with Sushil Pramanick, founder and president of The Big Data Institute (TBDI).
You can learn about upcoming chats and see the archive of past big data tweetchats here:
http://www.experian.com/blogs/news/about/datadriven
SharePoint BCS, OK. But what is the SharePoint Business Data List Connector (...) (Layer2)
The Layer2 Business Data List Connector for SharePoint makes it as easy as possible to connect native SharePoint lists to almost any external data source, codeless and bi-directional. The app closes many issues and overcomes limitations that still exist with SharePoint out-of-the-box data integration today.
To configure simply:
1. Enter connection string
2. Enter select statement
3. Enter primary keys.
Fast background updates (changed data only) via a timer job. Alerts and workflows can be used to take business actions in SharePoint when external data records are changed. Optionally, changes can be written back to the data source.
Take advantage of FME Server’s capabilities for real-time integration and change data capture. Learn about workflows for monitoring and updating your data as it changes. We’ll look at what data sources/systems are monitored out-of-the-box and how you can enable change data capture for other data sources/systems.
Go beyond spatial data and connect to a range of web and business formats. Plus, learn techniques for generating reports, dashboards, and analytics — whether you prefer Tableau, PDFs, spreadsheets, or web interfaces. We’ll look at what people have been doing to make their data more readable and how you can do it too.
"Big Data" is a term as ubiquitous as data itself, but it is more than just a way to describe the massive amount of information created every day. In fact, I would argue that it is more of a dynamic than a one-dimensional term.
In this presentation, I walk business audiences through the history and rise of big data, the four Vs of big Data, and end by looking at some practical applications and recommendations.
Originally presented on February 26, 2013 in Washington, DC at the US Chamber of Commerce.
This report offers a thorough, in-depth review of all the key stats for the Social, Digital and Mobile landscape in China in 2014. Packed with 95 slides covering platform preferences, behavioural usage and economic indicators, the deck presents stand-out infographics that are ready to copy-paste direct into your own presentations and blogs.
A data lake is a flat data store to collect data in its original form, without the need to enforce a predefined schema. Instead, new schemas or views are created "on demand", providing a far more agile and flexible architecture while enabling new types of analytical insights. AWS provides many of the building blocks required to help organizations implement a data lake. In this session, we introduce key concepts for a data lake and present aspects related to its implementation. We discuss critical success factors and pitfalls to avoid, as well as operational aspects such as security, governance, search, indexing, and metadata management. We also provide insight on how AWS enables a data lake architecture. Attendees get practical tips and recommendations to get started with their data lake implementations on AWS.
Modern Data Architecture for a Data Lake with Informatica and Hortonworks Dat... (Hortonworks)
How do you turn data from many different sources into actionable insights and manufacture those insights into innovative information-based products and services?
Industry leaders are accomplishing this by adding Hadoop as a critical component in their modern data architecture to build a data lake. A data lake collects and stores data across a wide variety of channels including social media, clickstream data, server logs, customer transactions and interactions, videos, and sensor data from equipment in the field. A data lake cost-effectively scales to collect and retain massive amounts of data over time, and convert all this data into actionable information that can transform your business.
Join Hortonworks and Informatica as we discuss:
- What is a data lake?
- The modern data architecture for a data lake
- How Hadoop fits into the modern data architecture
- Innovative use-cases for a data lake
Big data architectures and the data lake (James Serra)
With so many new technologies, it can be confusing to choose the best approach to building a big data architecture. The data lake is a great new concept, usually built in Hadoop, but what exactly is it and how does it fit in? In this presentation I'll discuss the four most common patterns in big data production implementations, the top-down vs. bottom-up approaches to analytics, and how you can use a data lake and an RDBMS data warehouse together. We will go into detail on the characteristics of a data lake and its benefits, and how you still need to perform the same data governance tasks in a data lake as you do in a data warehouse. Come to this presentation to make sure your data lake does not turn into a data swamp!
Today organizations find themselves in a data rich world with a growing need for increased agility and accessibility of all this data for analysis and deriving keen insights to drive strategic decisions. Creating a data lake helps you to manage all the disparate sources of data you are collecting, in its original format and extract value. In this session learn how to architect and implement an Analytics Data Lake. Hear customer examples of best practices and learn from their architectural blueprints.
Data Warehouse Design and Best Practices (Ivo Andreev)
A data warehouse is a database designed for query and analysis rather than for transaction processing. An appropriate design leads to a scalable, balanced and flexible architecture that is capable of meeting both present and long-term future needs. This session covers a comparison of the main data warehouse architectures, together with best practices for the logical and physical design that support staging, load and querying.
Implementing Change Systems in SQL Server 2016 (Douglas McClurg)
Features or concepts like Change Tracking, Change Data Capture, Temporal Tables, and similar delta systems are complex and may carry a stigma or misapprehension in your organization around performance, security, or cost. Even if you do not implement these features directly, most information systems rely on tracking changes, especially from legacy line-of-business applications. I'm here to show you robust techniques for implementing delta systems in SQL Server to increase the trustworthiness of your data warehouse. I will also steer you away from common pitfalls.
This was a very interesting conference, oriented toward ICT students, in which I walk them through the Azure ecosystem for data warehousing architecture and best practices for building powerful Business Intelligence solutions in the new era.
Azure Data Factory is a cloud-based data integration service that orchestrates and automates the movement and transformation of data. In this session we will learn how to create data integration solutions using the Data Factory service and ingest data from various data stores, transform/process the data, and publish the result data to the data stores.
12. Slowly Changing Dimensions
Support the primary role of the data warehouse: to describe the past accurately
Maintain historical context as new or changed data is loaded into dimension tables
Slowly Changing Dimension (SCD) types:
Type 1: Overwrite the existing dimension record
Type 2: Insert a new 'versioned' dimension record
Type 3: Track limited history with attributes
The concept of Slowly Changing Dimensions was introduced by Ralph Kimball
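A minimal Python sketch of the three SCD behaviors, assuming a hypothetical customer dimension whose City attribute changes; the column names and the RowIsCurrent/PriorCity columns are illustrative, not a prescribed schema:

```python
import copy

def scd_type1(row, new_city):
    """Type 1: overwrite the attribute in place -- history is lost."""
    row["City"] = new_city
    return row

def scd_type2(rows, new_city):
    """Type 2: expire the current row and insert a new versioned row."""
    current = next(r for r in rows if r["RowIsCurrent"])
    current["RowIsCurrent"] = False
    new_row = copy.deepcopy(current)
    new_row["CustomerKey"] = max(r["CustomerKey"] for r in rows) + 1
    new_row["City"] = new_city
    new_row["RowIsCurrent"] = True
    rows.append(new_row)
    return rows

def scd_type3(row, new_city):
    """Type 3: keep limited history in a dedicated 'prior value' column."""
    row["PriorCity"] = row["City"]
    row["City"] = new_city
    return row
```

Note how only Type 2 preserves the full history: the old row survives with its old attribute value, so historical facts keep joining to the version that was in effect when they occurred.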
Dimensions reflect the business processes (functional structure) and measures reflect the numeric data flow. A dimensional model is made up of a central fact table (or tables) and its associated dimensions. The dimensional model is also called a star schema because it looks like a star, with the fact table in the middle and the dimensions serving as the points of the star. From a relational data modeling perspective, the dimensional model consists of a normalized fact table with denormalized dimension tables.
Think about dimensions as tables in a database, because that is how they are implemented. Each table contains a list of homogeneous entities: products in a manufacturing company, patients in a hospital, vehicles on auto insurance policies, or customers in just about every organization. Usually, a dimension includes all instances of its entity (all the products the company sells, for example). There is only one active row for each particular instance in the table at any time, and each row has a set of attributes that identify, describe, define, and classify the instance. A product will have a certain size and a standard weight, and belong to a product group. These sizes and groups have descriptions; a food product might come in Mini-Pak or Jumbo size. A vehicle is painted a certain color, like white, and has a certain option package, such as the Jungle Jim sports utility package (which includes side impact air bags, a six-disc CD player, a DVD system, and simulated leopard skin seats).
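The one-active-row-per-instance structure and the role of descriptive attributes can be sketched with a hypothetical product dimension joined to a small sales fact table (all names and values here are illustrative):

```python
# Denormalized product dimension: one active row per product instance,
# with descriptive attributes used to constrain and group queries.
dim_product = [
    {"ProductKey": 1, "ProductName": "Granola", "Size": "Mini-Pak", "Group": "Food"},
    {"ProductKey": 2, "ProductName": "Granola", "Size": "Jumbo",    "Group": "Food"},
]

# Fact rows carry foreign keys to the dimension plus numeric measures.
fact_sales = [
    {"ProductKey": 1, "Quantity": 3, "Amount": 9.00},
    {"ProductKey": 2, "Quantity": 1, "Amount": 7.50},
    {"ProductKey": 1, "Quantity": 2, "Amount": 6.00},
]

def sales_by_size():
    """Join facts to the dimension and sum Amount per Size attribute."""
    by_key = {d["ProductKey"]: d for d in dim_product}
    totals = {}
    for f in fact_sales:
        size = by_key[f["ProductKey"]]["Size"]
        totals[size] = totals.get(size, 0.0) + f["Amount"]
    return totals
```

The query never sums the Size attribute itself; the descriptive attribute only groups and constrains, while the additive Amount measure does the summing.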
Most facts are numeric, and each fact value can vary widely depending on the business process being measured. Most facts are additive (such as dollar or unit sales), meaning they can be summed across all dimensions. Additivity is important because DW/BI applications seldom retrieve a single fact table record; user queries generally select hundreds or thousands of records at a time and add them up. Other facts are semi-additive (such as market share or account balance), and still others are non-additive (such as unit price).
Not all numeric data are facts. Exceptions include discrete descriptive information like package size or weight (which describes a product) or customer age (which describes a customer). Generally, these less volatile numeric values end up as descriptive attributes in dimension tables. Such descriptive information is more naturally used for constraining a query than for being summed in a computation. This distinction is helpful when deciding whether a data element is part of a dimension or a fact.
Some business processes track events without any real measures. If the event happens, we get an entry in the source system; if not, there is no row. Common examples of this kind of event include employment activities, such as hiring and firing, and event attendance, such as when a student attends a class. The fact tables that track these events typically do not have any actual fact measurements, so they are called factless fact tables. In practice, we usually add a column called something like EventCount that contains the number 1. This gives users an easy way to count the number of events by summing the EventCount fact.
Some facts are derived or computed from other facts, just as a Net Sales number is calculated from Gross Sales minus Sales Tax. Some semi-additive facts can be handled using a derived column based on the context of the query; Month End Balance would add up across accounts, but not across dates, for example.
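The EventCount trick for factless fact tables can be shown in a few lines; the class-attendance table below is a hypothetical example:

```python
# Factless fact table: each row records only that a student attended a
# class. The sole "measure" is EventCount = 1, so counting events
# becomes an ordinary additive sum.
fact_attendance = [
    {"StudentKey": 1, "ClassKey": 10, "EventCount": 1},
    {"StudentKey": 2, "ClassKey": 10, "EventCount": 1},
    {"StudentKey": 1, "ClassKey": 11, "EventCount": 1},
]

def attendance_per_class(rows):
    """Sum EventCount by class -- the standard factless-fact query."""
    totals = {}
    for r in rows:
        totals[r["ClassKey"]] = totals.get(r["ClassKey"], 0) + r["EventCount"]
    return totals
```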
The non-additive Unit Price example could be avoided by defining it as a computation done in the query: Total Amount divided by Total Quantity. There are several options for dealing with these derived or computed facts. You can calculate them as part of the ETL process and store them in the fact table, you can put them in the fact table view definition, or you can include them in the definition of the Analysis Services database. The only approach we find unacceptable is to leave the calculation to the user.
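The Unit Price computation can be made concrete: summing the totals first and dividing once at query time gives the correct weighted answer, whereas summing per-row unit prices would not (the rows below are illustrative):

```python
fact_rows = [
    {"Amount": 10.0, "Quantity": 5},   # per-row unit price 2.0
    {"Amount": 30.0, "Quantity": 5},   # per-row unit price 6.0
]

def unit_price(rows):
    """Derived, non-additive fact: divide the summed totals rather
    than summing the per-row unit prices."""
    total_amount = sum(r["Amount"] for r in rows)
    total_quantity = sum(r["Quantity"] for r in rows)
    return total_amount / total_quantity
```

Here the correct weighted unit price is 40.0 / 10, while naively adding the per-row prices (2.0 + 6.0) would give a meaningless number; this is exactly why the calculation should be defined centrally rather than left to the user.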
A surrogate key is a unique value, usually an integer, assigned to each row in the dimension. This surrogate key becomes the primary key of the dimension table and is used to join the dimension to the associated foreign key field in the fact table. Surrogate keys protect the DW/BI system from changes in the source system. Surrogate keys allow the DW/BI system to integrate data from multiple source systems. Different source systems might keep data on the same customers or products, but with different keys. Surrogate keys enable you to add rows to dimensions that do not exist in the source system. Surrogate keys provide the means for tracking changes in dimension attributes over time.
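A sketch of surrogate key assignment while loading a dimension from two source systems; the system names and natural keys here are hypothetical:

```python
def build_customer_dimension(sources):
    """Assign an integer surrogate key to each incoming row, regardless
    of which source system it came from or what natural key it uses."""
    dim = []
    next_key = 1
    for system_name, rows in sources.items():
        for row in rows:
            dim.append({
                "CustomerKey": next_key,     # surrogate key: dimension PK
                "SourceSystem": system_name,
                "NaturalKey": row["id"],     # the source system's own key
                "Name": row["name"],
            })
            next_key += 1
    return dim

sources = {
    "crm": [{"id": "C-100", "name": "Acme"}],
    "erp": [{"id": 7001,    "name": "Acme Corp"}],
}
```

Because the fact table joins on CustomerKey rather than the source keys, the warehouse is insulated from key changes in either source, and rows that exist in no source system (or in several) can still be represented.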
A Slowly Changing Dimension (SCD) is a dimension whose attribute values can change over time. There are three types of SCD. Type 1 overwrites the existing attribute value with the new value; a Type 1 change does not preserve the attribute value that was in place at the time a historical transaction occurred. Type 2 change tracking is a powerful technique for capturing the attribute values that were in effect at a point in time and relating them to the business events in which they participated; when a change to a Type 2 attribute occurs, the ETL process creates a new row in the dimension table to capture the new values of the changed item. Type 3 keeps separate columns for both the old and new attribute values; Type 3 is less common because it involves changing the physical tables and is not very scalable.