Because every organization produces and propagates data as part of their day-to-day operations, data trends are becoming more and more important in the mainstream business world’s consciousness. For many organizations in various industries, though, comprehension of this development begins and ends with buzzwords: “big data,” “NoSQL,” “data scientist,” and so on. Few realize that any and all solutions to their business problems, regardless of platform or relevant technology, rely to a critical extent on the data model supporting them. As such, Data Modeling is not an optional task for an organization’s data effort, but rather a vital activity that facilitates the solutions driving your business. Since quality engineering/architecture work products do not happen accidentally, the more your organization depends on automation, the more important the data models driving the engineering and architecture activities of your organization become. This webinar illustrates Data Modeling as a key activity upon which so much technology depends.
Too often I hear the question “Can you help me with our Data Strategy?” Unfortunately, for most, this is the wrong request because it focuses on the least valuable component – the Data Strategy itself. A more useful request is this: “Can you help me apply data strategically?” Yes, at early maturity phases the process of developing strategic thinking about data is more important than the actual product! Trying to write a good (much less perfect) Data Strategy on the first attempt is generally not productive – particularly given the widespread acceptance of Mike Tyson’s truism: “Everybody has a plan until they get punched in the face.” Refocus on learning how to iteratively improve the way data is strategically applied. This will permit data-based strategy components to keep up with agile, evolving organizational strategies. This approach can also contribute to three primary organizational data goals.
In this webinar, you will learn how improving your organization’s data, the way your people use data, and the way your people use data to achieve your organizational strategy will help in ways never imagined. Data are your sole non-depletable, non-degradable, durable strategic assets, and they are pervasively shared across every organizational area. Addressing existing challenges programmatically includes overcoming necessary but insufficient prerequisites and developing a disciplined, repeatable means of improving business objectives. This process (based on the theory of constraints) is where the strategic data work really occurs, as organizations identify prioritized areas where better assets, literacy, and support (Data Strategy components) can help an organization better achieve specific strategic objectives. Then the process becomes lather, rinse, and repeat. Several complementary concepts are also covered, including:
- A cohesive argument for why Data Strategy is necessary for effective Data Governance
- An overview of prerequisites for effective strategic use of Data Strategy, as well as common pitfalls
- A repeatable process for identifying and removing data constraints
- The importance of balancing business operation and innovation
Information management plays a critical role in supporting strategic business initiatives. Despite the apparent value of providing the data infrastructure for these initiatives, many executives question the economic feasibility of business intelligence and analytics. This requires information professionals to calculate and present the business value in terms business executives can understand.
Unfortunately, most IT professionals lack the knowledge required to develop comprehensive cost-benefit analyses and return on investment (ROI) measurements.
This session provides a framework to help IT professionals research, measure, and present the economic value of a proposed or existing information initiative. The session will provide practical advice about how to calculate ROI, which formula to use, and how to collect the necessary information.
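The ROI arithmetic the session describes can be sketched in a few lines. This is a minimal illustration only; the function names and the dollar figures are invented for the example, not taken from the session:

```python
# Minimal sketch of common ROI formulas (illustrative only; the session
# covers how to gather real cost and benefit inputs).

def simple_roi(total_benefit, total_cost):
    """ROI = (benefit - cost) / cost, expressed as a percentage."""
    return (total_benefit - total_cost) / total_cost * 100

def payback_period_years(initial_cost, annual_net_benefit):
    """Years until cumulative net benefit covers the initial cost."""
    return initial_cost / annual_net_benefit

# Hypothetical BI initiative: $500k total cost, $800k total benefit,
# and $200k of net benefit per year
roi = simple_roi(800_000, 500_000)                # 60.0 (%)
payback = payback_period_years(500_000, 200_000)  # 2.5 (years)
```

Presenting both a percentage return and a payback period tends to land better with executives than either number alone.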
Slides: Migrate BI Dashboards to Run Directly on a Cloud Data Lake in Five Ea... – DATAVERSITY
While BI dashboards are great at democratizing analytics in organizations, the architecture that traditionally powers them has hidden consequences that have serious impacts on the business.
This architecture is based on a 30-year-old paradigm that requires many different systems, ETL jobs, and copies of data in data marts, data warehouses, and BI extracts. One downside among many: it can take days, if not weeks, to answer a new business question with this architecture. The negative consequences are further multiplied by the tens, hundreds, or even thousands of dashboards needed to run a data-driven organization.
Now, there’s a straightforward way to overcome these challenges that many organizations are already taking advantage of: an open cloud data lake architecture and Dremio.
Join Jason Hughes, Technical Director at Dremio, for this webinar to learn how you can migrate BI dashboards to Dremio to quickly provide interactive dashboards to data consumers without the issues of the traditional architecture — and finally deliver the benefits always promised by BI.
What you’ll learn:
• Why the traditional BI dashboard architecture, implemented at scale, causes many issues that hinder the very insights it promises.
• How a Dremio-powered cloud data lake architecture eliminates or mitigates the negative consequences of the traditional approach.
• Step-by-step instructions for migrating a BI dashboard to run directly on a cloud data lake, both a self-contained example and your own dashboards.
ADV Slides: 2021 Trends in Enterprise Analytics – DATAVERSITY
It is a fascinating, explosive time for enterprise analytics.
It is from the position of analytics leadership that the mission will be executed, and company leadership will emerge. The data professional is absolutely sitting on the performance of the company in this information economy and has an obligation to demonstrate the possibilities and originate the architecture, data, and projects that will deliver analytics. After all, no matter what business you’re in, you’re in the business of analytics.
The coming years will be full of big changes in enterprise analytics and Data Architecture. William will kick off the third year of the Advanced Analytics series with a discussion of the trends winning organizations should build into their plans, expectations, vision, and awareness now.
Good data is like good water: best served fresh, and ideally well-filtered. Data Management strategies can produce tremendous procedural improvements and increased profit margins across the board, but only if the data being managed is of high quality. Determining how Data Quality should be engineered provides a useful framework for utilizing Data Quality management effectively in support of business strategy. This, in turn, allows for speedy identification of business problems, the delineation between structural and practice-oriented defects in Data Management, and proactive prevention of future issues. Organizations must realize what it means to utilize Data Quality engineering in support of business strategy. This webinar will illustrate how organizations with chronic business challenges often can trace the root of the problem to poor Data Quality. Showing how Data Quality should be engineered provides a useful framework in which to develop an effective approach. This, in turn, allows organizations to more quickly identify business problems as well as data problems caused by structural issues versus practice-oriented defects and prevent these from re-occurring.
Emerging Trends in Data Architecture – What’s the Next Big Thing – DATAVERSITY
Digital Transformation is a top priority for many organizations, and a successful digital journey requires a strong data foundation. Creating this digital transformation requires a number of core data management capabilities, such as MDM. With technological innovation and change occurring at an ever-increasing rate, it’s hard to keep track of what’s hype and what can provide practical value for your organization. Join this webinar to see the results of a recent DATAVERSITY survey on emerging trends in Data Architecture, along with practical commentary and advice from industry expert Donna Burbank.
Advanced Analytics: Analytic Platforms Should Be Columnar Orientation – DATAVERSITY
A columnar database is an implementation of relational theory, but with a twist: the data storage layer does not contain records; it contains groupings of columns.
Due to the variable column lengths within a row, a small column with low cardinality, or variability of values, may reside completely within one block while another column with high cardinality and longer length may take a thousand blocks. In columnar, all the same data — your data — is there. It’s just organized differently (automatically, by the DBMS).
The main reason why you would want to utilize a columnar approach is simply to speed up the native performance of analytic queries.
Learn about the columnar orientation and how it can be effective for your needs. It is the native orientation of many databases, and several others offer optional column-oriented storage layers.
There is also an equivalent in the cloud storage world: the open-format Parquet.
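The row-versus-column distinction described above can be made concrete with a toy sketch (plain Python standing in for a DBMS storage layer; the table and its values are invented for illustration):

```python
# Toy illustration of row orientation vs. column orientation.
# A real columnar DBMS reorganizes the data this way automatically.

rows = [  # row orientation: one record per entry
    {"id": 1, "region": "EU", "amount": 120.0},
    {"id": 2, "region": "US", "amount": 75.5},
    {"id": 3, "region": "EU", "amount": 200.0},
]

columns = {  # column orientation: one array per column
    "id":     [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [120.0, 75.5, 200.0],
}

# An analytic aggregate like SUM(amount) scans only the "amount" array
# in the columnar layout, instead of every field of every record.
total_row_layout = sum(r["amount"] for r in rows)
total_col_layout = sum(columns["amount"])
assert total_row_layout == total_col_layout == 395.5
```

The same data is present in both layouts; the columnar one simply lets an analytic query read far fewer blocks, which is the performance point made above.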
The first step towards understanding data assets’ impact on your organization is understanding what those assets mean for each other. Metadata — literally, data about data — is a practice area required by good systems development, and yet is also perhaps the most mislabeled and misunderstood Data Management practice. Understanding metadata and its associated technologies as more than just straightforward technological tools can provide powerful insight into the efficiency of organizational practices, and enable you to combine practices into sophisticated techniques, supporting larger and more complex business initiatives. Program learning objectives include:
* Understanding how to leverage metadata practices in support of business strategy
* Discussing foundational metadata concepts
* Guiding principles for, and lessons learned from, metadata and its practical uses in applied strategy
* Metadata strategies, including:
* Metadata is a gerund so don’t try to treat it as a noun
* Metadata is the language of Data Governance
* Treat glossaries/repositories as capabilities, not technology
Slides: Beyond Metadata — Enrich Your Metadata Management with Deep-Level Dat... – DATAVERSITY
Today’s growing complexity of the data ecosystem requires organizations to understand data at the data element level. Challenges in data collection, such as open text boxes/free-form text fields, combined with the velocity of incoming data, increase risk for organizations. This risk is amplified when those organizations rely exclusively on metadata scanning when it comes to discovering and actioning their data. The need to look deeper than basic metadata becomes even more pronounced when dealing with the semi-structured or unstructured data commonly found in file shares and email systems. Maintaining compliance and driving business value often requires scanning actual files, interpreting data, flagging risks, and integrating that risk into a data catalog. Going beyond metadata to the actual data element level ensures that your data catalog is a source of truth, which ultimately allows organizations to create agile Data Governance programs.
We’ll walk you through key considerations for going beyond knowing what metadata you have by:
• Underlining the importance of an enhanced, AI-driven data discovery tool to better understand your data and how it is being used
• Discussing components of an effective Metadata Management strategy including data inventories, data dictionaries, and usage requests
• Highlighting how the OneTrust platform embedded with regulatory intelligence helps you to go beyond metadata and address key use cases around unexpected or at-risk unstructured data
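The difference between metadata scanning and element-level scanning can be sketched minimally. This is a hypothetical illustration, not the OneTrust API; the patterns, field names, and sample text are invented:

```python
# Hypothetical element-level scan: look inside free-form text for risky
# patterns and emit catalog-style flags, rather than relying on field
# names (metadata) alone.
import re

RISK_PATTERNS = {
    "email":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_text(field_name, text):
    """Return risk flags found in an open text field."""
    return [
        {"field": field_name, "risk": name}
        for name, pattern in RISK_PATTERNS.items()
        if pattern.search(text)
    ]

flags = scan_text("customer_notes",
                  "Reach me at jo@example.com, SSN 123-45-6789")
# flags → [{'field': 'customer_notes', 'risk': 'email'},
#          {'field': 'customer_notes', 'risk': 'us_ssn'}]
```

A metadata-only scan would see an innocuous field called `customer_notes`; inspecting the actual values is what surfaces the risk.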
DataOps - The Foundation for Your Agile Data Architecture – DATAVERSITY
Achieving agility in data and analytics is hard. It’s no secret that most data organizations struggle to deliver the on-demand data products that their business customers demand. Recently, there has been much hype around new design patterns that promise to deliver this much sought-after agility.
In this webinar, Chris Bergh, CEO and Head Chef of DataKitchen will cut through the noise and describe several elegant and effective data architecture design patterns that deliver low errors, rapid development, and high levels of collaboration. He’ll cover:
• DataOps, Data Mesh, Functional Design, and Hub & Spoke design patterns;
• Where Data Fabric fits into your architecture;
• How different patterns can work together to maximize agility; and
• How a DataOps platform serves as the foundational superstructure for your agile architecture.
View the companion webinar at: http://embt.co/1L8V6dI
Some claim that, in the age of Big Data, data modeling is less important or even not needed. However, with the increased complexity of the data landscape, it is actually more important to incorporate data modeling in order to understand the nature of the data and how they are interrelated. In order to do this effectively, the way that we do data modeling needs to adapt to this complex environment.
One of the key data modeling issues is how to foster collaboration between new groups, such as data scientists, and traditional data management groups. There are often different paradigms, and yet it is critical to have a common understanding of data and semantics between different parts of an organization. In this presentation, Len Silverston will discuss:
+ How Big Data has changed our landscape and affected data modeling
+ How to conduct data modeling in a more ‘agile’ way for Big Data environments
+ How we can collaborate effectively within an organization, even with differing perspectives
About the Presenter:
Len Silverston is a best-selling author, consultant, and a fun and top-rated speaker in the fields of data modeling, data governance, and human behavior in the data management industry, where he has pioneered new approaches to effectively tackle enterprise data management. He has helped many organizations worldwide to integrate their data, systems, and even their people. He is well known for his work on "Universal Data Models", which are described in The Data Model Resource Book series (Volumes 1, 2, and 3).
ADV Slides: The Evolution of the Data Platform and What It Means to Enterpris... – DATAVERSITY
Thirty years is a long time for a technology foundation to stay as active as relational databases have. Are their replacements here?
In this webinar, we look at this foundational technology for modern Data Management and show how it evolved to meet the workloads of today, as well as when other platforms make sense for enterprise data.
You had a strategy. You were executing it. You were then side-swiped by COVID, spending countless cycles blocking and tackling. It is now time to step back onto your path.
CCG is holding a workshop to help you update your roadmap, get your team back on track, and review how Microsoft Azure Solutions can be leveraged to build a strong foundation for governed data insights.
Webinar: Decoding the Mystery - How to Know if You Need a Data Catalog, a Dat... – DATAVERSITY
There’s a lot of confusion out there about the differences between a data catalog, a data dictionary, and a business glossary, and it’s not always easy to understand who needs which and why. Join Malcolm Chisholm, Ph.D., President of Data Millennium, and Amichai Fenner, Product Lead at Octopai, as they help decode the mystery. Spoiler alert: one of these enables collaboration across BI and IT. Which is it?
DataEd Slides: Data Management + Data Strategy = Interoperability – DATAVERSITY
Few organizations operate without having to exchange data. (Many do it professionally and well!) The larger the data exchange burden (DEB), the greater the organizational overhead incurred. This death by 1,000 cuts must be factored into each organization’s calculations. Unfortunately, most organizations do not know if their organization’s DEB is great or small. A somewhat greater number of organizations have organized Data Management practices. Focusing Data Management efforts on increasing interoperability by decreasing the DEB friction is a good area to “practice.”
Learning Objectives:
• Gaining a good understanding of both important topics
• Understanding that data interoperates only under very intricate, specifically dependent conditions, and what this means
• Understanding the state of the practice
• Coordination is key, requiring necessary but insufficient interdependencies and sequencing
• Practice makes perfect
Learn how you can gain rapid insights and create more flexibility by capturing and storing data from a variety of sources and structures in a NoSQL database.
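The flexibility being described can be illustrated with a toy sketch (a plain-Python stand-in for a document-oriented NoSQL store; the documents and field names are invented):

```python
# Toy document store: documents from different sources keep their own
# shapes, and queries work over whatever fields each document has.
store = []

def insert(doc):
    store.append(doc)

def find(predicate):
    return [d for d in store if predicate(d)]

# Two sources with different structures land in the same collection,
# with no upfront shared schema required.
insert({"source": "crm",    "customer": "Acme", "tier": "gold"})
insert({"source": "sensor", "device_id": 42, "reading": {"temp_c": 21.5}})

gold = find(lambda d: d.get("tier") == "gold")
assert [d["customer"] for d in gold] == ["Acme"]
```

A real NoSQL database adds persistence, indexing, and scale on top of this idea, but the schema-on-read flexibility is the core of it.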
Platforming the Major Analytic Use Cases for Modern Engineering – DATAVERSITY
We’ll describe several examples from the broad range of modern use cases that need a platform, along with some popular, proven technology stacks that enterprises use to accomplish them: customer churn, predictive analytics, fraud detection, and supply chain management.
In many industries, to achieve top-line growth, it is imperative that companies get the most out of existing customer relationships. Customer churn use cases are about generating high levels of profitable customer satisfaction through the use of knowledge generated from corporate and external data to help drive a more positive customer experience (CX).
Many organizations are turning to predictive analytics to increase their bottom line and efficiency and, therefore, competitive advantage. It can make the difference between business success or failure.
Fraudulent activity detection is exponentially more effective when risk actions are taken immediately (i.e., stopping the fraudulent transaction), instead of after the fact. Fast digestion of a wide range of risk exposures across the network is required in order to minimize adverse outcomes.
Supply chain leaders are under constant pressure to reduce overall supply chain management (SCM) costs while maintaining a flexible and diverse supplier ecosystem. They will leverage IoT, sensors, cameras, and blockchain. Major investments in advanced analytics, warehouse relocation, and automation, both in distribution centers and stores, will be essential for survival.
Today, data lakes are widely used and have become extremely affordable as data volumes have grown. However, they are only meant for storage and by themselves provide no direct value. With up to 80% of data stored in the data lake today, how do you unlock the value of the data lake? The value lies in the compute engine that runs on top of a data lake.
Join us for this webinar where Ahana co-founder and Chief Product Officer Dipti Borkar will discuss how to unlock the value of your data lake with the emerging Open Data Lake analytics architecture.
Dipti will cover:
-Open Data Lake analytics - what it is and what use cases it supports
-Why companies are moving to an open data lake analytics approach
-Why the open source data lake query engine Presto is critical to this approach
In this session we will discuss Data Governance, mainly around that fantastic platform, Power BI (but also around on-prem concerns).
How do you avoid dataset hell? What are the best practices for sharing queries? Who is the famous Data Steward, and what is their role in a department or in the whole company? How do you choose the right person?
Keywords: Power Query, Data Management Gateway, Power BI Admin Center, Datastewardship, SharePoint 2013, eDiscovery
Level 200
Do-It-Yourself (DIY) Data Governance Framework – DATAVERSITY
A worthwhile Data Governance framework includes the core components of a successful program as viewed from the different levels of the organization. Each component is addressed at each level, providing insight into key ideas and terminology used to attract participation across the organization. A framework plays a key role in setting up and sustaining a Data Governance program.
In this RWDG webinar, Bob Seiner will share two frameworks. The first is a basic cross-reference of components and levels, while the second can be used to compare and contrast different approaches to implementing Data Governance. When this webinar is finished, you will be able to customize the frameworks to outline the most appropriate manner for you to improve your likelihood of DG success.
In this webinar, Bob will discuss and share:
- Customizing a framework to match organizational requirements
- The core components and levels of an industry framework
- How to complete a Data Governance framework
- Using the framework to enable DG program success
- Measuring value through the DIY DG framework
Data-Ed Online: Data Architecture Requirements – DATAVERSITY
Data architecture is foundational to an information-based operational environment. It is your data architecture that organizes your data assets so they can be leveraged in your business strategy to create real business value. Even though this is important, not all data architectures are used effectively. This webinar describes the use of data architecture as a basic analysis method. Various uses of data architecture to inform, clarify, understand, and resolve aspects of a variety of business problems will be demonstrated. As opposed to showing how to architect data, your presenter Dr. Peter Aiken will show how to use data architecting to solve business problems. The goal is for you to be able to envision a number of uses for data architectures that will raise the perceived utility of this analysis method in the eyes of the business.
Takeaways:
Understanding how to contribute to organizational challenges beyond traditional data architecting
How to utilize data architectures in support of business strategy
Understanding foundational data architecture concepts based on the DAMA DMBOK
Data architecture guiding principles & best practices
Slides: How AI Makes Analytics More Human – DATAVERSITY
People think AI makes analytics less human, replacing human decision making. But the truth is, AI actually makes analytics more human. Augmented analytics are helping organizations finally break through the low levels of adoption and limitations typical of 2nd generation visualization tools.
Most business problems cannot be solved purely by algorithms or machine learning — they require human interaction and perspective. Uniting precedent-based machine learning systems with natural human intuition and curiosity is the foundation of 3rd generation BI and democratizing data across an enterprise.
It is a natural next step to enhance your data ecosystem by deploying a platform with augmented intelligence that works alongside users to surface new insights, automate tasks, and support natural language interaction. All of these act as accelerators for achieving active intelligence and Data Literacy.
Estimating the Total Costs of Your Cloud Analytics Platform
Organizations today need a broad set of enterprise data cloud services with key data functionality to modernize applications and utilize machine learning. They need a platform designed to address multi-faceted needs by offering multi-function Data Management and analytics to solve the enterprise’s most pressing data and analytic challenges in a streamlined fashion. They need a worry-free experience with the architecture and its components.
The Shifting Landscape of Data Integration
Enterprises and organizations of every industry and scale are working to leverage data to achieve their strategic objectives, whether those are to be more profitable, effective, risk-tolerant, prepared, sustainable, and/or adaptable in an ever-changing world. Data has exploded in volume during the last decade as humans and machines alike produce it at an exponential pace, and exciting technologies have emerged around that data to expand what we can do with it.
Behind this data revolution, there are forces at work, causing enterprises to shift the way they leverage data and accelerate the demand for leverageable data. Organizations (and the climates in which they operate) are becoming more and more complex. They are also becoming increasingly digital and, thus, dependent on how data informs, transforms, and automates their operations and decisions. With increased digitization comes an increased need for both scale and agility at scale.
In this session, we have undertaken an ambitious goal of evaluating the current vendor landscape and assessing which platforms have made, or are in the process of making, the leap to this new generation of Data Management and integration capabilities.
Because every organization produces and propagates data as part of their day-to-day operations, data trends are becoming more and more important in the mainstream business world’s consciousness. For many organizations in various industries, though, comprehension of this development begins and ends with buzzwords such as “big data,” “NoSQL,” “data scientist,” and so on. Few realize that any and all solutions to their business problems, regardless of platform or relevant technology, rely to a critical extent on the data model supporting them. As such, Data Modeling is not an optional task for an organization’s data effort, but rather a vital activity that facilitates the solutions driving your business. Since quality engineering/architecture work products do not happen accidentally, the more your organization depends on automation, the more important are the data models driving the engineering and architecture activities of your organization.
Data-Ed Online: Trends in Data Modeling
Businesses cannot compete without data. Every organization produces and consumes it. Data trends are hitting the mainstream and businesses are adopting buzzwords such as Big Data, data vault, data scientist, etc., to seek solutions for their fundamental data issues. Few realize that the importance of any solution, regardless of platform or technology, relies on the data model supporting it. Data modeling is not an optional task for an organization’s data remediation effort. Instead, it is a vital activity that supports the solution driving your business.
This webinar will address emerging trends around data model application methodology, as well as trends around the practice of data modeling itself. We will discuss abstract models and entity frameworks, as well as the general shift from data modeling being segmented to becoming more integrated with business practices.
Takeaways:
How are anchor modeling, data vault, etc. different and when should I apply them?
Integrating data models to business models and the value this creates
Application development (Data first, code first, object first)
Data Architecture, Solution Architecture, Platform Architecture — What’s the ...
A solid data architecture is critical to the success of any data initiative. But what is meant by “data architecture”? Throughout the industry, there are many different “flavors” of data architecture, each with its own unique value and use cases for describing key aspects of the data landscape. Join this webinar to demystify the various architecture styles and understand how they can add value to your organization.
Learn about the three advances in database technologies that eliminate the need for star schemas and the resulting maintenance nightmare.
Relational databases in the 1980s were typically designed using the Codd-Date rules for data normalization, the most efficient way to store operational data. As BI and multi-dimensional analysis became popular, relational databases began to have performance issues when queries required multiple joins. The star schema was a clever way around those performance issues, ensuring that multi-dimensional queries could be resolved quickly. But this design came with its own set of problems.
Unfortunately, the analytic process is never simple. Business users always think up unanticipated ways to query the data, and the data itself often changes in unpredictable ways. The results are new dimensions, new and largely redundant star schemas and their indexes, and maintenance difficulties such as handling slowly changing dimensions. The analytical environment becomes overly complex and difficult to maintain, new capabilities take far too long to deliver, and the outcome is unsatisfactory for both the users and those maintaining it.
There must be a better way!
Watch this webinar to learn:
- The three technological advances in data storage that eliminate star schemas
- How these innovations benefit analytical environments
- The steps you will need to take to reap the benefits of being star schema-free
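To make the join-heavy pattern concrete, here is a minimal star schema sketch (the table and column names are illustrative assumptions, not taken from the webinar), showing how every multi-dimensional question becomes a multi-join query over one fact table and its dimensions:

```python
import sqlite3

# Toy star schema: one fact table joined to two dimension tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE dim_date    (date_id    INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_sales  (product_id INTEGER, date_id INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'Books'), (2, 'Games');
INSERT INTO dim_date    VALUES (10, 2023), (11, 2024);
INSERT INTO fact_sales  VALUES (1, 10, 5.0), (1, 11, 7.5), (2, 11, 3.0);
""")

# Each multi-dimensional question requires joining the fact table to
# every dimension involved -- the workload star schemas were built for.
rows = con.execute("""
    SELECT p.category, d.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_date    d ON d.date_id    = f.date_id
    GROUP BY p.category, d.year
    ORDER BY p.category, d.year
""").fetchall()
print(rows)  # [('Books', 2023, 5.0), ('Books', 2024, 7.5), ('Games', 2024, 3.0)]
```

Every new business question along a new dimension means another dimension table, another join, and another index, which is how the maintenance burden described above accumulates.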
Data-Ed Slides: Data Modeling Strategies - Getting Your Data Ready for the Ca...
Because every organization produces and propagates data as part of their day-to-day operations, data trends are becoming more and more important in the mainstream business world’s consciousness. For many organizations in various industries, though, comprehension of this development begins and ends with buzzwords: “Big Data”, “NoSQL”, “data scientist”, and so on. Few realize that any and all solutions to their business problems, regardless of platform or relevant technology, rely to a critical extent on the data model supporting them. As such, data modeling is not an optional task for an organization’s data effort, but rather a vital activity that facilitates the solutions driving your business.
Instead of the technical minutiae of data modeling, this webinar will focus on its value and practicality for your organization. In doing so, we will:
- Address fundamental data modeling methodologies, their differences and various practical applications, and trends around the practice of data modeling itself
- Discuss abstract models and entity frameworks, as well as some basic tenets for application development
- Examine the general shift from segmented data modeling to more business-integrated practices
Data-Ed Webinar: Data Modeling Fundamentals
Every organization produces and consumes data. Because data is so important to day-to-day operations, data trends are hitting the mainstream and businesses are adopting buzzwords such as Big Data, NoSQL, data scientist, etc., to seek solutions for their fundamental issues. Few realize that the importance of any solution, regardless of platform or technology, relies on the data model supporting it. Data modeling is not an optional task for an organization’s data effort. It is a vital activity that supports the solutions driving your business.
This webinar will address fundamental data modeling methodologies, as well as trends around the practice of data modeling itself. We will discuss abstract models and entity frameworks, as well as the general shift from data modeling being segmented to becoming more integrated with business practices.
Learning Objectives:
How are anchor modeling, data vault, etc. different and when should I apply them?
Integrating data models to business models and the value this creates
Application development (Data first, code first, object first)
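As a hedged illustration of the "data first, code first" distinction in the last objective, the sketch below shows a code-first flow: the model is authored as application code and a physical schema is derived from it. The Customer entity and the type mapping are illustrative assumptions, not material from the webinar:

```python
from dataclasses import dataclass, fields

# "Code first": the application model is the source of truth,
# and the database schema is generated from it.
@dataclass
class Customer:
    customer_id: int
    name: str
    email: str

# Map Python annotations to (SQLite-style) column types.
SQL_TYPES = {int: "INTEGER", str: "TEXT", float: "REAL"}

def ddl_for(model) -> str:
    """Derive a simple CREATE TABLE statement from a dataclass."""
    cols = ", ".join(f"{f.name} {SQL_TYPES[f.type]}" for f in fields(model))
    return f"CREATE TABLE {model.__name__.lower()} ({cols})"

print(ddl_for(Customer))
# CREATE TABLE customer (customer_id INTEGER, name TEXT, email TEXT)
```

A data-first flow runs the other way: an existing schema or dataset is inspected and the application model is derived from it.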
Slides: Beyond Metadata — Enrich Your Metadata Management with Deep-Level Dat...
The growing complexity of today’s data ecosystem requires organizations to understand data at the data element level. Data collection challenges, such as open text boxes and free-form text fields, combined with the velocity of incoming data, increase risk for organizations. This risk is amplified when organizations rely exclusively on metadata scanning to discover and action their data. The need to look deeper than basic metadata becomes even more pronounced when dealing with the semi-structured or unstructured data commonly found in file shares and email systems. Maintaining compliance and driving business value often requires scanning actual files, interpreting data, flagging risks, and integrating that risk into a data catalog. Going beyond metadata to the data element level ensures that your data catalog is a source of truth, which ultimately allows organizations to create agile Data Governance programs.
We’ll walk you through key considerations for going beyond knowing what metadata you have by:
• Underlining the importance of an enhanced, AI-driven data discovery tool to better understand your data and how it is being used
• Discussing components of an effective Metadata Management strategy including data inventories, data dictionaries, and usage requests
• Highlighting how the OneTrust platform embedded with regulatory intelligence helps you to go beyond metadata and address key use cases around unexpected or at-risk unstructured data
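A minimal sketch of what "going beyond metadata" can mean in practice: scanning actual values, not just schemas, for risky content in free-text fields. The patterns, field contents, and flagging logic below are illustrative assumptions, not part of the OneTrust platform:

```python
import re

# Metadata alone only says "free_text TEXT"; scanning the actual values
# can surface risk. These two patterns are illustrative examples.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_risks(records):
    """Return {record_index: [risk_types]} for values matching risk patterns."""
    findings = {}
    for i, text in enumerate(records):
        hits = [name for name, pat in PATTERNS.items() if pat.search(text)]
        if hits:
            findings[i] = hits
    return findings

comments = [
    "Great service, thanks!",
    "Contact me at jane.doe@example.com",
    "My SSN is 123-45-6789, please update my file",
]
print(flag_risks(comments))  # {1: ['email'], 2: ['ssn']}
```

In a real deployment these findings would be written back to the data catalog so governance decisions can be made at the data element level.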
DataOps - The Foundation for Your Agile Data Architecture
Achieving agility in data and analytics is hard. It’s no secret that most data organizations struggle to deliver the on-demand data products their business customers expect. Recently, there has been much hype around new design patterns that promise to deliver this much-sought-after agility.
In this webinar, Chris Bergh, CEO and Head Chef of DataKitchen will cut through the noise and describe several elegant and effective data architecture design patterns that deliver low errors, rapid development, and high levels of collaboration. He’ll cover:
• DataOps, Data Mesh, Functional Design, and Hub & Spoke design patterns;
• Where Data Fabric fits into your architecture;
• How different patterns can work together to maximize agility; and
• How a DataOps platform serves as the foundational superstructure for your agile architecture.
View the companion webinar at: http://embt.co/1L8V6dI
Some claim that, in the age of Big Data, data modeling is less important or even not needed. However, with the increased complexity of the data landscape, it is actually more important to incorporate data modeling in order to understand the nature of the data and how they are interrelated. In order to do this effectively, the way that we do data modeling needs to adapt to this complex environment.
One of the key data modeling issues is how to foster collaboration between new groups, such as data scientists, and traditional data management groups. There are often different paradigms, and yet it is critical to have a common understanding of data and semantics between different parts of an organization. In this presentation, Len Silverston will discuss:
+ How Big Data has changed our landscape and affected data modeling
+ How to conduct data modeling in a more ‘agile’ way for Big Data environments
+ How we can collaborate effectively within an organization, even with differing perspectives
About the Presenter:
Len Silverston is a best-selling author, consultant, and a fun, top-rated speaker in the fields of data modeling, Data Governance, and human behavior in the Data Management industry, where he has pioneered new approaches to effectively tackle enterprise Data Management. He has helped many organizations worldwide to integrate their data, systems, and even their people. He is well known for his work on "Universal Data Models", which are described in The Data Model Resource Book series (Volumes 1, 2, and 3).
ADV Slides: The Evolution of the Data Platform and What It Means to Enterpris...
Thirty years is a long time for a technology foundation to remain as dominant as relational databases have. Are their replacements here?
In this webinar, we look at this foundational technology for modern Data Management and show how it evolved to meet the workloads of today, as well as when other platforms make sense for enterprise data.
You had a strategy. You were executing it. Then you were side-swiped by COVID, spending countless cycles blocking and tackling. It is now time to step back onto your path.
CCG is holding a workshop to help you update your roadmap, get your team back on track, and review how Microsoft Azure solutions can be leveraged to build a strong foundation for governed data insights.
Webinar: Decoding the Mystery - How to Know if You Need a Data Catalog, a Dat...
There’s a lot of confusion out there about the differences between a data catalog, a data dictionary, and a business glossary, and it's not always easy to understand who needs which and why. Join Malcolm Chisholm, Ph.D., President of Data Millennium, and Amichai Fenner, Product Lead at Octopai, as they help decode the mystery. Spoiler alert: one of these enables collaboration across BI and IT. Which is it?
DataEd Slides: Data Management + Data Strategy = Interoperability
Few organizations operate without having to exchange data. (Many do it professionally and well!) The larger the data exchange burden (DEB), the greater the organizational overhead incurred. This death by 1,000 cuts must be factored into each organization’s calculations. Unfortunately, most organizations do not know if their organization’s DEB is great or small. A somewhat greater number of organizations have organized Data Management practices. Focusing Data Management efforts on increasing interoperability by decreasing the DEB friction is a good area to “practice.”
Learning Objectives:
• Gaining a good understanding of both important topics
• Understanding that data interoperates only with very intricate, specifically dependent intent, and what this means
• Understanding the state of the practice
• Recognizing that coordination is key, requiring necessary but insufficient interdependencies and sequencing
• Remembering that practice makes perfect
How you can gain rapid insights and create more flexibility by capturing and storing data from a variety of sources and structures into a NoSQL database.
Platforming the Major Analytic Use Cases for Modern Engineering
We’ll describe several examples from the broad range of modern use cases that need a platform, along with the popular, proven technology stacks that enterprises use to accomplish them: customer churn, predictive analytics, fraud detection, and supply chain management.
In many industries, to achieve top-line growth, it is imperative that companies get the most out of existing customer relationships. Customer churn use cases are about generating high levels of profitable customer satisfaction through the use of knowledge generated from corporate and external data to help drive a more positive customer experience (CX).
Many organizations are turning to predictive analytics to increase their bottom line and efficiency and, therefore, competitive advantage. It can make the difference between business success or failure.
Fraudulent activity detection is exponentially more effective when risk actions are taken immediately (i.e., stopping the fraudulent transaction) rather than after the fact. Fast ingestion of a wide range of risk exposures across the network is required in order to minimize adverse outcomes.
Supply chain leaders are under constant pressure to reduce overall supply chain management (SCM) costs while maintaining a flexible and diverse supplier ecosystem. They will leverage IoT, sensors, cameras, and blockchain. Major investments in advanced analytics, warehouse relocation, and automation, both in distribution centers and stores, will be essential for survival.
Today, data lakes are widely used and have become extremely affordable as data volumes have grown. However, they are only meant for storage and by themselves provide no direct value. With up to 80% of data stored in the data lake today, how do you unlock the value of the data lake? The value lies in the compute engine that runs on top of a data lake.
Join us for this webinar where Ahana co-founder and Chief Product Officer Dipti Borkar will discuss how to unlock the value of your data lake with the emerging Open Data Lake analytics architecture.
Dipti will cover:
- Open Data Lake analytics: what it is and what use cases it supports
- Why companies are moving to an open data lake analytics approach
- Why Presto, the open-source data lake query engine, is critical to this approach
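As a toy analogy for the storage/compute split described above (Presto itself is a distributed SQL engine over open formats such as Parquet and ORC; the in-memory SQLite engine and CSV data here are illustrative stand-ins, not the actual architecture):

```python
import csv
import io
import sqlite3

# Storage layer: a raw file, as it might sit in a data lake.
raw_file = io.StringIO("user_id,event\n1,click\n1,view\n2,click\n")

# Compute layer: an ephemeral SQL engine spun up over the raw data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, event TEXT)")
rows = [(int(r["user_id"]), r["event"]) for r in csv.DictReader(raw_file)]
con.executemany("INSERT INTO events VALUES (?, ?)", rows)

# Analytics run in the engine; the raw file itself is never modified.
clicks = con.execute(
    "SELECT user_id, COUNT(*) FROM events "
    "WHERE event = 'click' GROUP BY user_id ORDER BY user_id"
).fetchall()
print(clicks)  # [(1, 1), (2, 1)]
```

The point of the analogy is that the stored files provide no direct value on their own; the value comes from the query engine that runs on top of them.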
In this session we will discuss Data Governance, mainly around the Power BI platform (but also around on-premises concerns).
How do you avoid dataset hell? What are the best practices for sharing queries? Who is the famous Data Steward, and what is their role in a department or in the whole company? How do you choose the right person?
Keywords: Power Query, Data Management Gateway, Power BI Admin Center, data stewardship, SharePoint 2013, eDiscovery
Level 200
Do-It-Yourself (DIY) Data Governance Framework
A worthwhile Data Governance framework includes the core components of a successful program as viewed from the different levels of the organization. Each component is addressed at each level, providing insight into the key ideas and terminology used to attract participation across the organization. A framework plays a key role in setting up and sustaining a Data Governance program.
In this RWDG webinar, Bob Seiner will share two frameworks. The first is a basic cross-reference of components and levels, while the second can be used to compare and contrast different approaches to implementing Data Governance. When this webinar is finished, you will be able to customize the frameworks to outline the most appropriate manner for you to improve your likelihood of DG success.
In this webinar, Bob will discuss and share:
- Customizing a framework to match organizational requirements
- The core components and levels of an industry framework
- How to complete a Data Governance framework
- Using the framework to enable DG program success
- Measuring value through the DIY DG framework
Data-Ed Webinar: Data Architecture Requirements
Data architecture is foundational to an information-based operational environment. It is your data architecture that organizes your data assets so they can be leveraged in your business strategy to create real business value. Even though this is important, not all data architectures are used effectively. This webinar describes the use of data architecture as a basic analysis method. Various uses of data architecture to inform, clarify, understand, and resolve aspects of a variety of business problems will be demonstrated. As opposed to showing how to architect data, your presenter Dr. Peter Aiken will show how to use data architecting to solve business problems. The goal is for you to be able to envision a number of uses for data architectures that will raise the perceived utility of this analysis method in the eyes of the business.
Takeaways:
Understanding how to contribute to organizational challenges beyond traditional data architecting
How to utilize data architectures in support of business strategy
Understanding foundational data architecture concepts based on the DAMA DMBOK
Data architecture guiding principles & best practices
Conceptual vs. Logical vs. Physical Data Modeling – DATAVERSITY
A model is developed for a purpose. Understanding the strengths of each of the three Data Modeling types will prepare you with a more robust analyst toolkit. The program will describe modeling characteristics shared by each modeling type. Using the context of a reverse engineering exercise, delegates will be able to trace model components as they are used in a common data reengineering exercise that is also tied to a Data Governance exercise.
Learning objectives:
- Understand the role played by models
- Differentiate appropriate use among conceptual, logical, and physical data models
- Understand the rigor of round-trip data reengineering analyses
- Apply appropriate use of various Data Modeling types
Find more Data-Ed webinars here: www.datablueprint.com
Business Value Through Reference and Master Data Strategies – DATAVERSITY
Data tends to pile up and can be rendered unusable or obsolete without careful maintenance processes. Reference and Master Data Management (MDM) has been a popular Data Management approach to effectively gain mastery over not just the data but the supporting architecture for processing it. This webinar presents MDM as a strategic approach to improving and formalizing practices around those data items that provide context for many organizational transactions — the master data. Too often, MDM has been implemented technology-first and achieved the same very poor track record (one-third succeeding on time, within budget, and achieving planned functionality). MDM success depends on a coordinated approach, typically involving Data Governance and Data Quality activities.
Learning Objectives:
• Understand foundational reference and MDM concepts based on the Data Management Body of Knowledge (DMBoK)
• Understand why these are an important component of your Data Architecture
• Gain awareness of reference and MDM frameworks and building blocks
• Know what MDM guiding principles consist of and best practices
• Know how to utilize reference and MDM in support of business strategy
Data Architecture is foundational to an information-based operational environment. Without proper structure and efficiency in organization, data assets cannot be utilized to their full potential, which in turn harms bottom-line business value. When designed well and used effectively, however, a strong Data Architecture can be referenced to inform, clarify, understand, and resolve aspects of a variety of business problems commonly encountered in organizations.
The goal of this webinar is not to instruct you in being an outright Data Architect, but rather to enable you to envision a number of uses for Data Architectures that will maximize your organization’s competitive advantage. With that being said, we will:
Discuss Data Architecture’s guiding principles and best practices
Demonstrate how to utilize Data Architecture to address a broad variety of organizational challenges and support your overall business strategy
Illustrate how best to understand foundational Data Architecture concepts based on “The DAMA Guide to the Data Management Body of Knowledge” (DAMA DMBOK)
Whether you call it data munging, data cleansing, or data wrangling, everyone agrees that data preparation activities account for 80% of analysts’ time, leaving only 20% for analysis. Shifting this work to more specialized talent represents a major source of data analysis productivity improvements. This program “walks” through the major preparation categories including collection, evaluation, evolution, access design, and storage requirements. Understanding each in context also provides opportunities to develop complementary Data Governance/ethics frameworks. A generalized approach is presented.
Learning objectives:
- Appreciate the savings that can accrue from transforming data preparation from one-off to an improvable process
- Recognize what data preparation knowledge/skills your organization has and/or needs
- Better know the transformations that data can survive as it is prepared to be analyzed
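One theme above is turning data preparation from a one-off chore into an improvable, repeatable process. As a minimal sketch (not taken from the webinar itself, and with illustrative field names and cleaning rules), a single preparation step can be captured as a reusable, testable function rather than ad hoc spreadsheet work:

```python
# Hedged sketch: turning one-off data cleaning into a reusable, testable step.
# The field names and missing-value markers below are illustrative assumptions,
# not content from the webinar.

def clean_record(raw: dict) -> dict:
    """Normalize a single record: trim strings, map empty markers to None."""
    missing = {"", "n/a", "na", "null", "-"}
    cleaned = {}
    for key, value in raw.items():
        if isinstance(value, str):
            value = value.strip()
            if value.lower() in missing:
                value = None
        cleaned[key] = value
    return cleaned

# Applying the same rule to every record makes the step repeatable and auditable.
records = [{"name": "  Ada ", "dept": "N/A"}, {"name": "Grace", "dept": "Eng"}]
print([clean_record(r) for r in records])
```

Because the rule is a named function, it can be unit-tested and improved over time, which is the shift from one-off effort to an improvable process that the abstract describes.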
DataEd Slides: Data Architecture versus Data Modeling – DATAVERSITY
Data Modeling is how we do Data Architecture. Many are confused when it comes to data. Architecture, models, data – it can seem a bit overwhelming. This webinar offers a clear explanation of Data Modeling as the primary means of achieving a better understanding of Data Architecture components. Using a storytelling format, this webinar presents an organization approaching the daunting process of attempting to better leverage its data. The organization is currently unfamiliar with these concepts and begins the process of understanding its current state as well as a desired future state. We join as the organization takes steps to better understand what it has and what it needs to accomplish to employ Data Modeling and Data Architecture to achieve its mission.
Architecting Data For The Modern Enterprise – Data Summit 2017, Closing Keynote – Caserta
The “Big Data era” has ushered in an avalanche of new technologies and approaches for delivering information and insights to business users. What is the role of the cloud in your analytical environment? How can you make your migration as seamless as possible? This closing keynote, delivered by Joe Caserta, a prominent consultant who has helped many global enterprises adopt Big Data, provided the audience with the inside scoop needed to supplement data warehousing environments with data intelligence—the amalgamation of Big Data and business intelligence.
This presentation was given as the closing keynote at DBTA's annual Data Summit in NYC.
Data-Ed Slides: Data Architecture Strategies - Constructing Your Data Garden – DATAVERSITY
Data architecture is foundational to an information-based operational environment. Without proper structure and efficiency in organization, data assets cannot be utilized to their full potential, which in turn harms bottom-line business value. When designed well and used effectively, however, a strong data architecture can be referenced to inform, clarify, understand, and resolve aspects of a variety of business problems commonly encountered in organizations.
The goal of this webinar is not to instruct you in being an outright data architect, but rather to enable you to envision a number of uses for data architectures that will maximize your organization’s competitive advantage.
With that being said, we will:
- Discuss data architecture’s guiding principles and best practices
- Demonstrate how to utilize data architecture to address a broad variety of organizational challenges and support your overall business strategy
- Illustrate how best to understand foundational data architecture concepts based on the DAMA International Guide to Data Management Body of Knowledge (DAMA DMBOK)
Good data is like good water: best served fresh, and ideally well-filtered. Data Management strategies can produce tremendous procedural improvements and increased profit margins across the board, but only if the data being managed is of high quality. This webinar will illustrate how organizations with chronic business challenges can often trace the root of the problem to poor Data Quality. Showing how Data Quality should be engineered provides a useful framework for employing Data Quality management effectively in support of business strategy. This, in turn, allows organizations to more quickly identify business problems, distinguish data problems caused by structural issues from those caused by practice-oriented defects, and prevent these issues from recurring.
Learning objectives:
- Help you understand foundational Data Quality concepts for improving Data Quality at your organization
- Demonstrate how chronic business challenges for organizations are often rooted in poor Data Quality
- Share case studies illustrating the hallmarks and benefits of Data Quality success
DataEd Slides: Growing Practical Data Governance Programs – DATAVERSITY
At its core, Data Governance (DG) is managing data with guidance. This immediately provokes the question: Would you tolerate any of your assets being managed without guidance? (In all likelihood, your organization has been managing data without adequate guidance, and this accounts for its current, less-than-optimal state.) This program provides a practical guide to implementing DG or recharging your existing program. It provides an understanding of what Data Governance functions are required and how they fit with other Data Management disciplines. Understanding these aspects is a necessary prerequisite to eliminate the ambiguity that often surrounds initial discussions and implement effective Data Governance/stewardship programs that manage data in support of the organizational strategy. Program learning objectives include:
• Understanding why Data Governance can be tricky for organizations due to data’s confounding characteristics
• Strategy #1: Keeping DG practically focused
• Strategy #2: DG must exist at the same level as HR
• Strategy #3: Gradually add ingredients
• Data Governance in action: storytelling
DataEd Online: Data Architecture and Data Modeling Differences — Achieving a ... – DATAVERSITY
Many can be confused when it comes to data topics. Architecture, models, data — it can seem a bit overwhelming. This program offers a clear explanation of Data Modeling and Data Architecture with a focus on the power of their interdependence. Both Data Architecture and data models are made more useful by each other. Data models are a primary means to achieve a shared understanding of specific data challenges. They are literally the pages that intersect data assets and the organizational response. Data models, as documentation, are the currency of data coordination, used to verify integration, and are mandated input to any data systems evolution. Ideally, Data Architecture is the sum of the organizational data models. However, coverage is rarely complete. Anytime you are talking about architecture, it is important to include the complementary role of engineered data models. Developing these models often incorporates both forward and reverse perspectives. Only when working in a coordinated manner can organizations take steps to better understand what they have and what they need to accomplish by employing Data Modeling and Data Architecture.
This program's learning objectives include:
- Understanding the role played by models
- Incorporating the interrelated concepts of architecture/engineering
- What is taught: forward engineering with a goal of building
- What is also needed: reverse engineering with a goal of understanding
- How increasing coordination requirements increase design simplicity
It is a fascinating, explosive time for enterprise analytics.
It is from the position of analytics leadership that the mission will be executed and company leadership will emerge. The data professional is absolutely sitting on the performance of the company in this information economy and has an obligation to demonstrate the possibilities and originate the architecture, data, and projects that will deliver analytics. After all, no matter what business you’re in, you’re in the business of analytics.
The coming years will be full of big changes in enterprise analytics and Data Architecture. William will kick off the fourth year of the Advanced Analytics series with a discussion of the trends winning organizations should build into their plans, expectations, vision, and awareness now.
Essential Reference and Master Data Management – DATAVERSITY
Data tends to pile up and can be rendered unusable or obsolete without careful maintenance processes. Reference and Master Data Management (MDM) has been a popular Data Management approach to effectively gain mastery over not just the data but the supporting architecture for processing it. This webinar presents MDM as a strategic approach to improving and formalizing practices around those data items that provide context for many organizational transactions: its master data. Too often, MDM has been implemented technology-first and achieved the same very poor track record (one-third succeeding on-time, within budget, and achieving planned functionality). MDM success depends on a coordinated approach typically involving Data Governance and Data Quality activities.
Learning objectives:
- Understand foundational reference and MDM concepts based on the Data Management Body of Knowledge (DMBOK)
- Understand why these are an important component of your Data Architecture
- Gain awareness of Reference and MDM Frameworks and building blocks
- Know what MDM guiding principles consist of and best practices
- Know how to utilize reference and MDM in support of business strategy
DAS Slides: Data Modeling Case Study — Business Data Modeling at Kiewit – DATAVERSITY
Kiewit has been a leader in the construction industry since 1884. Key to the organization’s success is not only its focus on high quality engineering and its forward-thinking workforce, but its ability to manage complexity in a clear, concise, and data-driven way. As part of the organization’s strategic initiative to become even more data-driven in the way it estimates and manages projects, conceptual data models were built to create an overview of critical key data assets. Data architecture diagrams resonated well with key stakeholders who were well accustomed to driving success based on architectural diagrams, and these models were a key driver for the future data strategy for the organization. Join this webinar to learn more about Kiewit’s path to success through business-focused data models.
Architecture, Products, and Total Cost of Ownership of the Leading Machine Le... – DATAVERSITY
Organizations today need a broad set of enterprise data cloud services with key data functionality to modernize applications and utilize machine learning. They need a comprehensive platform designed to address multi-faceted needs by offering multi-function data management and analytics to solve the enterprise’s most pressing data and analytic challenges in a streamlined fashion.
In this research-based session, I’ll discuss what the components are in multiple modern enterprise analytics stacks (i.e., dedicated compute, storage, data integration, streaming, etc.) and focus on total cost of ownership.
A complete machine learning infrastructure cost for the first modern use case at a midsize to large enterprise will be anywhere from $3 million to $22 million. Get this data point as you take the next steps on your journey into the highest spend and return item for most companies in the next several years.
Data at the Speed of Business with Data Mastering and Governance – DATAVERSITY
Do you ever wonder how data-driven organizations fuel analytics, improve customer experience, and accelerate business productivity? They are successful by governing and mastering data effectively so they can get trusted data to those who need it faster. Efficient data discovery, mastering, and democratization are critical for swiftly linking accurate data with business consumers. When business teams can quickly and easily locate, interpret, trust, and apply data assets to support sound business judgment, it takes less time to see value.
Join data mastering and data governance experts from Informatica—plus a real-world organization empowering trusted data for analytics—for a lively panel discussion. You’ll hear more about how a single cloud-native approach can help global businesses in any economy create more value—faster, more reliably, and with more confidence—by making data management and governance easier to implement.
What is data literacy? Which organizations, and which workers in those organizations, need to be data-literate? There are seemingly hundreds of definitions of data literacy, along with almost as many opinions about how to achieve it.
In a broader perspective, companies must consider whether data literacy is an isolated goal or one component of a broader learning strategy to address skill deficits. How does data literacy compare to other types of skills or “literacy” such as business acumen?
This session will position data literacy in the context of other worker skills as a framework for understanding how and where it fits and how to advocate for its importance.
Building a Data Strategy – Practical Steps for Aligning with Business Goals – DATAVERSITY
Developing a Data Strategy for your organization can seem like a daunting task – but it’s worth the effort. Getting your Data Strategy right can provide significant value, as data drives many of the key initiatives in today’s marketplace – from digital transformation, to marketing, to customer centricity, to population health, and more. This webinar will help demystify Data Strategy and its relationship to Data Architecture and will provide concrete, practical ways to get started.
Uncover how your business can save money and find new revenue streams.
Driving profitability is a top priority for companies globally, especially in uncertain economic times. It's imperative that companies reimagine growth strategies and improve process efficiencies to help cut costs and drive revenue – but how?
By leveraging data-driven strategies layered with artificial intelligence, companies can achieve untapped potential and help their businesses save money and drive profitability.
In this webinar, you'll learn:
- How your company can leverage data and AI to reduce spending and costs
- Ways you can monetize data and AI and uncover new growth strategies
- How different companies have implemented these strategies to achieve cost optimization benefits
Data Catalogs Are the Answer – What Is the Question? – DATAVERSITY
Organizations with governed metadata made available through their data catalog can answer questions their people have about the organization’s data. These organizations get more value from their data, protect their data better, gain improved ROI from data-centric projects and programs, and have more confidence in their most strategic data.
Join Bob Seiner for this lively webinar where he will talk about the value of a data catalog and how to build the use of the catalog into your stewards’ daily routines. Bob will share how the tool must be positioned for success and viewed as a must-have resource that is a steppingstone and catalyst to governed data across the organization.
In this webinar, Bob will focus on:
-Selecting the appropriate metadata to govern
-The business and technical value of a data catalog
-Building the catalog into people’s routines
-Positioning the data catalog for success
-Questions the data catalog can answer
Because every organization produces and propagates data as part of their day-to-day operations, data trends are becoming more and more important in the mainstream business world’s consciousness. For many organizations in various industries, though, comprehension of this development begins and ends with buzzwords: “Big Data,” “NoSQL,” “Data Scientist,” and so on. Few realize that all solutions to their business problems, regardless of platform or relevant technology, rely to a critical extent on the data model supporting them. As such, data modeling is not an optional task for an organization’s data effort, but rather a vital activity that facilitates the solutions driving your business. Since quality engineering/architecture work products do not happen accidentally, the more your organization depends on automation, the more important the data models driving the engineering and architecture activities of your organization become. This webinar illustrates data modeling as a key activity upon which so much technology and business investment depends.
Specific learning objectives include:
- Understanding what types of challenges require data modeling to be part of the solution
- How automation requires standardization derivable via data modeling techniques
- Why only a working partnership between data and the business can produce useful outcomes
Analytics play a critical role in supporting strategic business initiatives. Despite the obvious value to analytic professionals of providing the analytics for these initiatives, many executives question the economic return of analytics as well as data lakes, machine learning, master data management, and the like.
Technology professionals need to calculate and present business value in terms business executives can understand. Unfortunately, most IT professionals lack the knowledge required to develop comprehensive cost-benefit analyses and return on investment (ROI) measurements.
This session provides a framework to help technology professionals research, measure, and present the economic value of a proposed or existing analytics initiative, no matter what form the business benefit takes. The session will provide practical advice on how to calculate ROI, the formulas involved, and how to collect the necessary information.
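The session's own frameworks and formulas are not reproduced here, but as a minimal sketch of the core arithmetic, ROI is typically expressed as net benefit over cost. The dollar figures below are illustrative placeholders, not data from the session:

```python
# Hedged sketch: the basic ROI formula for an analytics initiative.
# ROI = (total benefits - total costs) / total costs, expressed as a fraction.

def roi(total_benefits: float, total_costs: float) -> float:
    """Return ROI as a fraction of cost: (benefits - costs) / costs."""
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return (total_benefits - total_costs) / total_costs

# Illustrative example: an initiative costing $500k that yields $800k
# in measurable benefit.
print(f"ROI: {roi(800_000, 500_000):.0%}")  # ROI: 60%
```

The hard part in practice is not this arithmetic but quantifying the benefit side credibly, which is precisely what the session's framework addresses.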
How a Semantic Layer Makes Data Mesh Work at Scale – DATAVERSITY
Data Mesh is a trending approach to building a decentralized data architecture by leveraging a domain-oriented, self-service design. However, the pure definition of Data Mesh lacks a center of excellence or central data team and doesn’t address the need for a common approach for sharing data products across teams. The semantic layer is emerging as a key component to supporting a Hub and Spoke style of organizing data teams by introducing data model sharing, collaboration, and distributed ownership controls.
This session will explain how data teams can define common models and definitions with a semantic layer to decentralize analytics product creation using a Hub and Spoke architecture.
Attend this session to learn about:
- The role of a Data Mesh in the modern cloud architecture.
- How a semantic layer can serve as the binding agent to support decentralization.
- How to drive self service with consistency and control.
Enterprise data literacy. A worthy objective? Certainly! A realistic goal? That remains to be seen. As companies consider investing in data literacy education, questions arise about its value and purpose. While the destination – having a data-fluent workforce – is attractive, we wonder how (and if) we can get there.
Kicking off this webinar series, we begin with a panel discussion to explore the landscape of literacy, including expert positions and results from focus groups:
- why it matters,
- what it means,
- what gets in the way,
- who needs it (and how much they need),
- what companies believe it will accomplish.
In this engaging discussion about literacy, we will set the stage for future webinars to answer specific questions and feature successful literacy efforts.
The Data Trifecta – Privacy, Security & Governance Race from Reactivity to Re... – DATAVERSITY
Change is hard, especially in response to negative stimuli, real or perceived. Organizations need to reframe how they think about data privacy, security, and governance, treating them as value centers that 1) ensure enterprise data can flow where it needs to, 2) prevent – not just react to – internal and external threats, and 3) comply with data privacy and security regulations.
Working together, these roles can accelerate faster access to approved, relevant and higher quality data – and that means more successful use cases, faster speed to insights, and better business outcomes. However, both new information and tools are required to make the shift from defense to offense, reducing data drama while increasing its value.
Join us for this panel discussion with experts in these fields as they discuss:
- Recent research about where data privacy, security and governance stand
- The most valuable enterprise data use cases
- The common obstacles to data value creation
- New approaches to data privacy, security and governance
- Their advice on how to shift from a reactive to resilient mindset/culture/organization
You’ll be educated, entertained and inspired by this panel and their expertise in using the data trifecta to innovate more often, operate more efficiently, and differentiate more strategically.
Emerging Trends in Data Architecture – What’s the Next Big Thing? – DATAVERSITY
With technological innovation and change occurring at an ever-increasing rate, it’s hard to keep track of what’s hype and what can provide practical value for your organization. Join this webinar to see the results of a recent DATAVERSITY survey on emerging trends in Data Architecture, along with practical commentary and advice from industry expert Donna Burbank.
Data Governance Trends - A Look Backwards and Forwards – DATAVERSITY
As DATAVERSITY’s RWDG series hurtles into its 12th year, this webinar takes a quick look behind us, evaluates the present, and predicts the future of Data Governance. Based on webinar numbers, hot Data Governance topics have evolved over the years from policies and best practices, roles and tools, data catalogs and frameworks, to supporting data mesh and fabric, artificial intelligence, virtualization, literacy, and metadata governance.
Join Bob Seiner as he reflects on the past and what has and has not worked, while sharing examples of enterprise successes and struggles. In this webinar, Bob will challenge the audience to stay a step ahead by learning from the past and blazing a new trail into the future of Data Governance.
In this webinar, Bob will focus on:
- Data Governance’s past, present, and future
- How trials and tribulations evolve to success
- Leveraging lessons learned to improve productivity
- The great Data Governance tool explosion
- The future of Data Governance
Data Governance Trends and Best Practices To Implement Today – DATAVERSITY
Would you share your bank account information on social media? How about shouting your social security number on the New York City subway? We didn’t think so either – that’s why data governance is consistently top of mind.
In this webinar, we’ll discuss the common Cloud data governance best practices – and how to apply them today. Join us to uncover Google Cloud’s investment in data governance and learn practical and doable methods around key management and confidential computing. Hear real customer experiences and leave with insights that you can share with your team. Let’s get solving.
Topics that you will hear addressed in this webinar:
- Understanding the basics of Cloud Incident Response (IR) and anticipated data governance trends
- Best practices for key management and applying data governance to your day-to-day work
- The next wave of Confidential Computing and how to get started, including a demo
It is a fascinating, explosive time for enterprise analytics.
It is from the position of analytics leadership that the enterprise mission will be executed and company leadership will emerge. The data professional is absolutely sitting on the performance of the company in this information economy and has an obligation to demonstrate the possibilities and originate the architecture, data, and projects that will deliver analytics. After all, no matter what business you’re in, you’re in the business of analytics.
The coming years will be full of big changes in enterprise analytics and data architecture. William will kick off the fifth year of the Advanced Analytics series with a discussion of the trends winning organizations should build into their plans, expectations, vision, and awareness now.
Too often I hear the question “Can you help me with our data strategy?” Unfortunately, for most, this is the wrong request because it focuses on the least valuable component: the data strategy itself. A more useful request is: “Can you help me apply data strategically?” Yes, at early maturity phases the process of developing strategic thinking about data is more important than the actual product! Trying to write a good (much less perfect) data strategy on the first attempt is generally not productive, particularly given the widespread acceptance of Mike Tyson’s truism: “Everybody has a plan until they get punched in the face.” This program refocuses efforts on learning how to iteratively improve the way data is strategically applied. This will permit data-based strategy components to keep up with agile, evolving organizational strategies. It also contributes to three primary organizational data goals. Learn how to improve the following:
- Your organization’s data
- The way your people use data
- The way your people use data to achieve your organizational strategy
This will help in ways never imagined. Data are your sole non-depletable, non-degradable, durable strategic assets, and they are pervasively shared across every organizational area. Addressing existing challenges programmatically includes overcoming necessary but insufficient prerequisites and developing a disciplined, repeatable means of improving business objectives. This process (based on the theory of constraints) is where the strategic data work really occurs as organizations identify prioritized areas where better assets, literacy, and support (data strategy components) can help an organization better achieve specific strategic objectives. Then the process becomes lather, rinse, and repeat. Several complementary concepts are also covered, including:
- A cohesive argument for why data strategy is necessary for effective data governance
- An overview of prerequisites for effective strategic use of data strategy, as well as common pitfalls
- A repeatable process for identifying and removing data constraints
- The importance of balancing business operation and innovation
Who Should Own Data Governance – IT or Business? – DATAVERSITY
The question is asked all the time: “What part of the organization should own your Data Governance program?” The typical answers are “the business” and “IT (information technology).” Another answer to that question is “Yes.” The program must be owned and reside somewhere in the organization. You may ask yourself if there is a correct answer to the question.
Join this new RWDG webinar with Bob Seiner, where Bob will answer the question posed in the webinar’s title. Determining ownership of Data Governance is a vital first step; figuring out the appropriate part of the organization to manage the program is an important second step. This webinar will help you address these questions and more.
In this session Bob will share:
- What is meant by “the business” when it comes to owning Data Governance
- Why some people say that Data Governance in IT is destined to fail
- Examples of IT-positioned Data Governance success
- Considerations for answering the question in your organization
- The final answer to the question of who should own Data Governance
It is clear that Data Management best practices exist, and so does a useful process for improving existing Data Management practices. The question arises: since we understand the goal, how does one design a process for achieving Data Management goals? This program describes what must be done at the programmatic level to achieve better data use and a way to implement this as part of your data program. The approach combines DMBoK content and CMMI/DMM processes – permitting organizations to benefit from the best of both. It also permits organizations to understand:
- Their current Data Management practices
- Strengths that should be leveraged
- Remediation opportunities
MLOps – Applying DevOps to Competitive Advantage – DATAVERSITY
MLOps is a practice for collaboration between Data Science and operations to manage production machine learning (ML) lifecycles. As an amalgamation of “machine learning” and “operations,” MLOps applies DevOps principles to ML delivery, enabling the delivery of ML-based innovation at scale to result in:
- Faster time to market of ML-based solutions
- More rapid rate of experimentation, driving innovation
- Assurance of quality, trustworthiness, and ethical AI
MLOps is essential for scaling ML. Without it, enterprises risk struggling with costly overhead and stalled progress. Several vendors have emerged with offerings to support MLOps; among the major offerings are Microsoft Azure ML and Google Vertex AI. We looked at these offerings from the perspective of enterprise features and time-to-value.
Opendatabay - Open Data Marketplace.pptx – Opendatabay
Opendatabay.com unlocks the power of data for everyone. The Open Data Marketplace is the first open hub for data enthusiasts to collaborate and innovate – a platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, Opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. It leverages cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay’s AI-driven features streamline the data workflow. Finding the data you need shouldn’t be complex: Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay also breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they’re working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits, Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay: the marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... – John Andrews
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. For more details, visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Adjusting primitives for graph: SHORT REPORT / NOTES – Subhajit Sahu
Graph algorithms such as PageRank commonly operate on Compressed Sparse Row (CSR), an adjacency-list-based graph representation. These notes benchmark the vector primitives (map and reduce) that such algorithms build on.
Multiply with different modes (map)
1. Performance of sequential vs OpenMP-based vector multiply.
2. Comparing various launch configs for CUDA-based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential vs OpenMP-based vector element sum.
2. Performance of memcpy-based vs in-place CUDA-based vector element sum.
3. Comparing various launch configs for CUDA-based vector element sum (memcpy).
4. Comparing various launch configs for CUDA-based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA-based vector element sum (in-place).