The term disruptive innovation was popularized by Harvard professor Clayton Christensen in his 1997 book "The Innovator's Dilemma." Nearly 20 years later, "Disrupt!" is a popular leadership mantra that is more frequently uttered than experienced. You can't productize it. You can't always control it – at least not its effects in practice. You won't necessarily like every product of innovation. So are you sure you want it? If so, how do you promote a culture in which innovation can flower – and, potentially, thrive? Because that is probably the best you can do.
Perhaps there's a better framing for innovation than just "disruption." This session is an overview of commoditization and innovation theories, followed by basic things you can do to apply that theory to your daily job of architecting, choosing and managing a data environment in your company.
Everything Has Changed Except Us: Modernizing the Data Warehouse – Mark Madsen
Keynote, Munich, June 2016
The way we make decisions has changed. The data we use has changed. The techniques we can apply to data and decisions have changed. Yet what we build and how we build it has barely changed in 20 years.
The definition of madness is doing more of what you already do and expecting different results. The threat to the data warehouse is not from new technology that will replace the data warehouse. It is from destabilization caused by new technology as it changes the architecture, and from failure to adapt to those changes.
The technology that we use is problematic because it constrains and sometimes prevents necessary activities. We don’t need more technology and bigger machines. We need different technology that does different things. More product features from the same vendors won’t solve the problem.
The data we want to use is challenging. We can’t model and clean and maintain it fast enough. We don’t need more data modeling to solve this problem. We need less modeling and more metadata.
And lastly, a change in scale has occurred. It isn’t a simple problem of “big”. The problem with current workloads has been solved, despite the performance problems that many people still have today. Scale has many dimensions – important among them are the number of discrete sources and structures, the rate of change of individual structures, the rate of change in data use, the variety of uses and the concurrency of those uses.
In short, we need new architecture that is not focused on creating stability in data, but one that is adaptable to continuous and rapidly changing uses of data.
BI Isn't Big Data and Big Data Isn't BI (updated) – Mark Madsen
Big data is hyped, but isn't hype. There are definite technical, process and business differences in the big data market when compared to BI and data warehousing, but they are often poorly understood or explained. BI isn't big data, and big data isn't BI. By distilling the technical and process realities of big data systems and projects we can separate fact from fiction. This session examines the underlying assumptions and abstractions we use in the BI and DW world, the abstractions that evolved in the big data world, and how they are different. Armed with this knowledge, you will be better able to make design and architecture decisions. The session is sometimes conceptual, sometimes detailed technical explorations of data, processing and technology, but promises to be entertaining regardless of the level.
Yes, it’s about the data normally called “big”, but it’s not Hadoop for the database crowd, despite the prominent role Hadoop plays. The session will be technical, but in a technology preview/overview fashion. I won’t be teaching you to write MapReduce jobs or anything of the sort.
The first part will be an overview of the types, formats and structures of data that aren’t normally in the data warehouse realm. The second part will cover some of the basic technology components, vendors and architecture.
The goal is to provide an overview of the extent of data available and some of the nuances or challenges in processing it, coupled with some examples of tools or vendors that may be a starting point if you are building in a particular area.
Data lakes, data exhaust, web scale, data is the new oil. Vendors are throwing new terms and analogies at us to convince us to buy their products as the market around data technologies grows. We change data persistence and transaction layers because "databases don't scale" or because data is "unstructured". If data had no structure then it wouldn't be data, it would be noise. Schema on read, schema on write, schemaless databases; they imply structure underlying the data. All data has schema, but that word may not mean what you think it means.
This presentation will describe concepts of data storage and retrieval from technology prehistory (i.e. before the 1980s) and examine the design principles behind both old and new technology for managing data because sometimes post-relational is actually pre-relational. It is important to separate what is identical to things that were tried in the past from new twists on old topics that deliver new capabilities.
Directly related to these topics are performance, scalability and the realities of what organizations do with data over time. All of these topics should guide architecture decisions to avoid the trap of creating technical debts that must be paid later, after systems are in place and change is difficult.
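The schema-on-write versus schema-on-read distinction in the abstract above can be illustrated with a minimal sketch. This is a hypothetical example (the record fields and table are invented for illustration), using only the Python standard library:

```python
import json
import sqlite3

# Schema on write: structure is enforced when data is stored.
# The declared columns act as a contract; ill-fitting records are
# rejected up front, before any query runs.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (user_id INTEGER, action TEXT)")
db.execute("INSERT INTO events VALUES (?, ?)", (42, "login"))

# Schema on read: raw records are stored as-is, and structure is
# imposed by each query. The schema still exists; it just lives in
# the reading code instead of the storage layer.
raw_log = ['{"user_id": 42, "action": "login", "device": "mobile"}']
events = [json.loads(line) for line in raw_log]
mobile_logins = [e for e in events
                 if e.get("action") == "login" and e.get("device") == "mobile"]
print(len(mobile_logins))  # each reader decides which fields matter
```

Either way the data has a schema, as the abstract argues; the design question is whether that schema is enforced once at write time or re-derived by every reader.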
Briefing Room: An Alternative for Streaming Data Collection – Mark Madsen
Knowing what’s happening in your enterprise right now can mark the difference between success and failure. The key is to have a rich view of activity, such that analysts and others can explore in a fully multidimensional fashion. Benefiting from such a detailed perspective can help professionals identify the exact nature of problems or opportunities, thus enabling precise actions that make a difference quickly.
Register for this episode of The Briefing Room to hear veteran Analyst Mark Madsen of Third Nature explain how a nexus of innovations for analyzing network traffic can help companies stay on top of their game. He’ll be briefed by Erik Giesa of ExtraHop, who will showcase his company’s stream analytics technology for wire data, which provides real-time, multidimensional views of network traffic. He’ll share success stories of how ExtraHop has solved otherwise intractable problems and enabled a new level of root-cause analysis.
Data Architecture: OMG It’s Made of People – Mark Madsen
Do you have data? Do you have users? Do they use that data to solve problems? Then you have a data architecture. Maybe your architecture is organic and accidental, or maybe it’s an accumulation of the latest practices and technologies you heard about on Stack Overflow.
Spoiler: data architecture is about people and how they use data, not the latest pipeline framework or AI model. Data architecture is about enabling users to be productive, not adding the next “shiny object” and then blaming the users for using it wrong. What you design needs to focus on a different subject than either technology or data.
Join Kevin Bogusch, Ecosystem Architect, as he talks with Mark Madsen, Fellow at the Technology Innovation Office, on the crucial elements you’re missing in a successful data architecture: people and process. Find out why Mark says, “don’t buy one problem to solve another problem.”
Solve User Problems: Data Architecture for Humans – Mark Madsen
We are bombarded with stories of the latest products to hit the market – products that will change everything we do. This causes us to focus on the latest technology, building IT for the sake of building IT. Meanwhile, the world still seems to run on Excel.
The “big innovators” who have and use unimaginably large amounts of data are not the norm. Aspiring to use the same complex technologies and patterns they do leads to poor investments and tradeoffs. This is an age-old problem rooted in the over-emphasis of technology as the agent of change. Technology isn’t the answer – it’s the platform on which people build answers.
To emphasize technology is to ignore the way tools change people and practices. The design focus in our market was on storing and making data accessible. If we want to make progress then we need to step back from the details and look at data from the perspective of the organization. Our design focus shifts to people learning and applying new insights, asking questions about how an organization can be more resilient, more efficient, or faster to sense and respond to changing conditions.
In this talk you will learn how to put your data architecture into a human frame of reference. Drawing inspiration from the history of technology and urban planning, we will see that the services provided by the things we build are what drive success, not the latest shiny distraction.
Assumptions about Data and Analysis: Briefing Room Webcast Slides – Mark Madsen
In many ways, moving data is like moving furniture: it's an unpleasant process dubbed an occasional necessary evil. But as the data pipelines of old decay, a new reality is taking shape: the data-native architecture. Unlike traditional data processing for BI and Analytics, this approach works on data right where it lives, thus eliminating the pain of forklifting, narrowing the margin of error, and expediting the time to business benefit. The new architecture embodies new assumptions, some of which we will talk about here.
Register for this episode of The Briefing Room to hear veteran Analyst Mark Madsen of Third Nature explain why this shift is truly tectonic. He'll be briefed by Steve Wooledge of Arcadia Data who will showcase his company's technology, which leverages a data-native architecture to fuel rapid-fire visualization and analysis of both big data and small.
BioIT World 2016 – HPC Trends from the Trenches – Chris Dagdigian
As presented at BioIT World 2016. In one of the more popular presentations of the Expo, Chris delivers a candid assessment of the best, the worthwhile, and the most overhyped information technologies (IT) for life sciences. He’ll cover what has changed (or not) in the past year around infrastructure, storage, computing, and networks. This presentation will help you understand IT to build and support data intensive science.
Video link from the presentation: biote.am/bs
[Note: email chris@bioteam.net if you would like a PDF copy of this presentation]
This was a 30 min talk intended as one of the opening/overview presentations before a full-day deep dive into ScienceDMZ design patterns and architectures.
Direct downloads are not enabled. Contact me directly (chris@bioteam.net) if, for some odd reason, you want a copy of this slide deck!
How to Understand Trends in the Data & Software Market – Mark Madsen
The big challenge most analytics and IT professionals face today is dealing with complexity. Trends are still not clear. It helps to look at the past and current state to understand what’s really happening in the data technology market – a whole lot of reinvention and some innovation, but not where you expect it.
We have the (well-understood) problems that we have, with their (well-understood) limitations and intractabilities.
We deal with them in the world in which they were first codified and framed. Paradigms (world views) change as a function of politics, economics, technology, culture, use and growth, however, and when the world changes we will have criteria for framing not just the problems, shortcomings and intractabilities of the prior paradigm, but that paradigm itself.
At that point, however, it will have ceased to matter because we’ll be dealing with fundamentally new problems/shortcomings/intractabilities.
Taming Big Science Data Growth with Converged Infrastructure – The BioTeam Inc.
2014 BioIT World Expo presentation
Many of the largest NGS sites have identified I/O bottlenecks as their number one concern in growing their infrastructure to support current and projected data growth rates. In this talk Aaron D. Gardner, Senior Scientific Consultant, BioTeam, Inc., will share real-world strategies and implementation details for building converged storage infrastructure to support the performance, scalability and collaborative requirements of today's NGS workflows.
For a copy of this presentation please email: chris@bioteam.net
This is a very short slide deck I did for a 10-minute slot on a http://pistoiaalliance.org/ webinar. The slides do not fully cover what I intend to talk about so if the webinar is recorded and available afterwards I'll update this description with the recording URL.
PDF copy of the slides available upon request (chris@bioteam.net)
This is a custom "Bio IT trends/problems" deck that I did for a general but highly technical audience at the 2014 Internet2 Technology Exchange conference.
Download of the raw PPT is disabled; contact me at chris@bioteam.net if a direct copy or PDF of the presentation would be useful.
Innovation with Big Data – Chr. Hansen's Experiences – Microsoft
In many places, Big Data is still the new and unknown, and not a top priority for IT because "we don't have large volumes of data." But Big Data is much more than large volumes of data. At Chr. Hansen A/S, the Research and Development (Innovation) department has explored the value of data and, as a result, established a cross-disciplinary BioInformatics program built on Big Data technologies from Microsoft.
The talk presents the evolution of Big Data systems from single-purpose MapReduce frameworks to fully general computational infrastructures. In particular, I will follow the evolution of Hadoop and show the benefits and challenges of a new architectural paradigm that decouples the resource management component (YARN) from the specifics of the application frameworks (e.g., MapReduce, Tez, REEF, Giraph, Naiad, Dryad, Spark, ...). We argue that besides the primary goals of increasing scalability and programming-model flexibility, this transformation dramatically facilitates innovation.
In this context, I will present some of our contributions to the evolution of Hadoop (namely, work-preserving preemption and predictable resource allocation) and comment on the fascinating experience of working on open-source technologies from within Microsoft. The current Hadoop APIs (HDFS and YARN) provide the cluster equivalent of an OS API. With this as a backdrop, I will present our attempt to create the equivalent of stdlib for the cluster: the REEF project.
Carlo A. Curino received a PhD from Politecnico di Milano and spent two years as a Postdoctoral Associate at CSAIL, MIT, leading the Relational Cloud project. He worked at Yahoo! Research as a Research Scientist focusing on mobile/cloud platforms and entity deduplication at scale. Carlo is currently a Senior Scientist at Microsoft in the Cloud and Information Services Lab (CISL), where he works on big-data platforms and cloud computing.
Facilitating Collaborative Life Science Research in Commercial & Enterprise E... – Chris Dagdigian
This is a talk I put together for a http://www.neren.org/ seminar called "Bridging the Gap: Research Facilitation". Tried to give a biotech/pharma view for a mostly academic audience.
Bio-IT & Cloud Sobriety: 2013 Beyond the Genome Meeting – Chris Dagdigian
October 2013 "Beyond the Genome" presentation slides. Talk is mostly focused on issues around IaaS cloud usage for "Bio-IT" and life science informatics & scientific computing.
PDF slides available directly – please email chris@bioteam.net for slides.
Mapping Life Science Informatics to the Cloud – Chris Dagdigian
Infrastructure cloud platforms such as those offered by Amazon Web Services are not designed and built with scientific research as the primary use case. These presentation slides cover the current state of mapping life science research and HPC technique onto “the cloud” and how to work around the common engineering, orchestration and data movement problems.
[Note: I've replaced the 2011 version of this talk deck with a slightly updated version as delivered at the AIRI Petabyte Challenge Meeting]
2014 BioIT World – Trends from the Trenches – Annual Presentation – Chris Dagdigian
Talk slides from the annual "trends from the trenches" address at BioITWorld Expo. 2014 Edition.
### Email chris@bioteam.net if you'd like a PDF copy of this deck ###
Big Data Information Architecture PowerPoint Presentation Slide – SlideTeam
Our Big Data Information Architecture PowerPoint presentation slide helps you represent complex data frameworks in a systematic manner. Manifesting complex ideas simply is not always easy, which is why we provide well-researched formats and designs for professional use. Our team of experts ensures that every slide is framed to serve the client well, with numerous icons and images for visual engagement. The deck covers the major viewpoints of data structure, including data market forecasts, financial aspects, social media approaches and the comparisons used in data analysis. These slides help hold your audience's attention and improve the quality and accuracy of your business processes.
BioITWorld 2013 presentation - Best practices for building multi-tenant HPC clusters for Pharma/BioTech
Essentially a mini case study of a recent deployment of a multi-petabyte, 1000+ CPU core Linux cluster in the Boston area.
Please email me at: chris@bioteam.net if you would like the actual PDF file itself.
The Evolving Role of the Data Engineer – Whitepaper | Qubole – Vasu S
A whitepaper about how the evolving data engineering profession helps data-driven companies work smarter and lower cloud costs with Qubole.
https://www.qubole.com/resources/white-papers/the-evolving-role-of-the-data-engineer
Disruptive and breakthrough innovations alter our world. Some domains of technology are evolving at a pace that is almost alarming. The future is never predictable, however, and a breakthrough technology in a domain can revolutionize the way the world works without much warning. Moore's Law was expected to hit a plateau, yet with the advent of quantum computing it has become relevant again, and computational speeds may even outpace it. Materials technologies, including nanoscience, will continue to excite researchers, and the biosciences, combined with the synergizing effects of other domains of science, can be expected to take giant leaps. Artificial intelligence will probably pervade everything we touch and feel.
BioIT World 2016 - HPC Trends from the TrenchesChris Dagdigian
As presented at BioIT World 2016. In one of the more popular presentations of the Expo, Chris delivers a candid assessment of the best, the worthwhile, and the most overhyped information technologies (IT) for life sciences. He’ll cover what has changed (or not) in the past year around infrastructure, storage, computing, and networks. This presentation will help you understand IT to build and support data intensive science.
Video link from the presentation: biote.am/bs
[Note: email chris@bioteam.net if you would like a PDF copy of this presentation]
This was a 30 min talk intended as one of the opening/overview presentations before a full-day deep dive into ScienceDMZ design patterns and architectures.
Direct downloads are not enabled. Contact me directly (chris@bioteam.net) if you for some odd reason want a copy of this slide deck!
How to understand trends in the data & software market – Mark Madsen
The big challenge most analytics and IT professionals face today is dealing with complexity. Trends are still not clear. It helps to look at the past and current state to understand what’s really happening in the data technology market – a whole lot of reinvention and some innovation, but not where you expect it.
We have the (well-understood) problems that we have, with their (well-understood) limitations and intractabilities.
We deal with them in the world in which they were first codified and framed. Paradigms (world views) change as a function of politics, economics, technology, culture, use and growth, however, and when the world changes we'll have criteria for framing not just the problems/shortcomings/intractabilities of the prior paradigm, but that paradigm itself.
At that point, however, it will have ceased to matter because we’ll be dealing with fundamentally new problems/shortcomings/intractabilities.
Taming Big Science Data Growth with Converged Infrastructure – The BioTeam Inc.
2014 BioIT World Expo presentation
"Many of the largest NGS sites have identified IO bottlenecks as their number one concern in growing their infrastructure to support current and projected data growth rates. In this talk Aaron D. Gardner, Senior Scientific Consultant, BioTeam, Inc. will share real-world strategies and implementation details for building converged storage infrastructure to support the performance, scalability and collaborative requirements of today's NGS workflows. "
For a copy of this presentation please email: chris@bioteam.net
This is a very short slide deck I did for a 10-minute slot on a http://pistoiaalliance.org/ webinar. The slides do not fully cover what I intend to talk about so if the webinar is recorded and available afterwards I'll update this description with the recording URL.
PDF copy of the slides available upon request ("chris@bioteam.net")
This is a custom "Bio IT trends/problems" deck that I did for a general but highly technical audience at the 2014 Internet2 Technology Exchange conference.
Download of the raw PPT is disabled; contact me at chris@bioteam.net if a direct copy or PDF of the presentation would be useful.
Innovation with big data – Chr. Hansen's experiences – Microsoft
In many places, Big Data is still the new and unknown, without top priority in IT because "we don't have large volumes of data." But Big Data is much more than large volumes of data. At Chr. Hansen A/S, the Research and Development (Innovation) department has worked with the value of data and, as a result, established a cross-disciplinary bioinformatics program on Big Data technologies from Microsoft.
The talk presents the evolution of Big-Data systems from single-purpose MapReduce frameworks to fully general computational infrastructures. In particular, I will follow the evolution of Hadoop, and show the benefits and challenges of a new architectural paradigm that decouples the resource management component (YARN) from the specifics of the application frameworks (e.g., MapReduce, Tez, REEF, Giraph, Naiad, Dryad, Spark, ...). We argue that besides the primary goals of increasing scalability and programming-model flexibility, this transformation dramatically facilitates innovation.
In this context, I will present some of our contributions to the evolution of Hadoop (namely: work-preserving preemption, and predictable resource allocation), and comment on the fascinating experience of working on open- source technologies from within Microsoft. The current Hadoop APIs (HDFS and YARN) provide the cluster equivalent of an OS API. With this as a backdrop, I will present our attempt to create the equivalent of stdlib for the cluster: the REEF project.
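The decoupling described above can be caricatured in a few lines. This is a toy sketch of the architectural idea only, a generic resource manager granting containers to arbitrary application frameworks through one narrow interface; it is not the actual YARN API, and all names here are invented.

```python
# Toy illustration of decoupling resource management from application
# frameworks: any framework (MapReduce-like, graph-like, ...) negotiates
# containers through the same allocate/release protocol.

class ResourceManager:
    """Tracks cluster capacity and grants generic containers."""
    def __init__(self, total_containers):
        self.free = total_containers

    def allocate(self, n):
        granted = min(n, self.free)   # grant only what is available
        self.free -= granted
        return granted

    def release(self, n):
        self.free += n

def run_framework(rm, name, containers_needed):
    """A stand-in for any application framework using the shared protocol."""
    got = rm.allocate(containers_needed)
    result = f"{name} ran on {got} containers"
    rm.release(got)                   # hand capacity back when done
    return result

rm = ResourceManager(total_containers=10)
print(run_framework(rm, "mapreduce-job", 4))
print(run_framework(rm, "graph-job", 8))
```

The point of the sketch is that the resource manager knows nothing about what the frameworks compute, which is exactly what lets new frameworks appear without changes to the cluster layer.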
Carlo A. Curino received a PhD from Politecnico di Milano, and spent two years as Post Doc Associate at CSAIL MIT leading the relational cloud project. He worked at Yahoo! Research as Research Scientist focusing on mobile/cloud platforms and entity deduplication at scale. Carlo is currently a Senior Scientist at Microsoft in the Cloud and Information Services Lab (CISL) where he is working on big-data platforms and cloud computing.
Facilitating Collaborative Life Science Research in Commercial & Enterprise E... – Chris Dagdigian
This is a talk I put together for a http://www.neren.org/ seminar called "Bridging the Gap: Research Facilitation". Tried to give a biotech/pharma view for a mostly academic audience.
Bio-IT & Cloud Sobriety: 2013 Beyond The Genome Meeting – Chris Dagdigian
October 2013 "Beyond the Genome" presentation slides. Talk is mostly focused on issues around IaaS cloud usage for "Bio-IT" and life science informatics & scientific computing.
PDF SLIDES AVAILABLE DIRECTLY - PLEASE EMAIL "CHRIS@BIOTEAM.NET" FOR SLIDES
Mapping Life Science Informatics to the Cloud – Chris Dagdigian
Infrastructure cloud platforms such as those offered by Amazon Web Services are not designed and built with scientific research as the primary use case. These presentation slides cover the current state of mapping life science research and HPC technique onto “the cloud” and how to work around the common engineering, orchestration and data movement problems.
[Note: I've replaced the 2011 version of this talk deck with a slightly updated version as delivered at the AIRI Petabyte Challenge Meeting]
2014 BioIT World - Trends from the trenches - Annual presentation – Chris Dagdigian
Talk slides from the annual "trends from the trenches" address at BioITWorld Expo. 2014 Edition.
### Email chris@bioteam.net if you'd like a PDF copy of this deck ###
Big Data Information Architecture PowerPoint Presentation Slide – SlideTeam
Feel enthralled by all the attention our Big Data Information Architecture PowerPoint presentation slide attracts. While designing the framework for a durable system, it can be tricky to represent all the data systematically, and manifesting complex ideas in a simplified manner doesn't always come easily. That's why we offer well-researched formats and designs for professional, long-lasting solutions. Our team of experts makes sure that all the PPT slides work for the best of the client. Numerous icons and images are used for visual engagement. We have covered every viewpoint of data structure possible, including data market forecasts, financial aspects, social media approach and the different comparisons used in data analysis, for an out-of-the-box view. These PowerPoint slides are your gateway to progress: they hold your viewers' attention and improve the quality and accuracy of business processes.
BioITWorld 2013 presentation - Best practices for building multi-tenant HPC clusters for Pharma/BioTech
Essentially a mini case study of a recent deployment of a multi-petabyte, 1000+ CPU core Linux cluster in the Boston area.
Please email me at: chris@bioteam.net if you would like the actual PDF file itself.
A brief description of Clayton Christensen's concept of disruptive technology and how it helps us understand why companies go bankrupt under conditions of technological change.
WECREATE Innovation presents a thought piece on "next practice" in how to co-create breakthrough purpose-driven innovations. It contains tools, approaches, processes, mindsets and cultures, killers and drivers of innovation, and more: a thorough synthesis of available thinking and cutting-edge tools from the WECREATE experience of doing disruptive innovation with leading NGOs, national and local government, and Fortune 500 companies, with the intention of helping all innovators generate and implement breakthroughs, particularly those working in the complex social and impact economies.
Crossing the chasm with a high performance dynamically scalable open source p... – Mark Madsen
Many open source projects are seeking commercial backing and growing into businesses – e.g. Spark, Cassandra, Hadoop. How do you market your project or product to investors or to enterprises? What should your pitch deck or customer presentation look like? This talk will show you the archetype (based on watching hundreds of these presentations) and exactly how not to do a presentation like this.
Intro:
Mark Madsen’s exposure to the software industry can be measured in terapoints (that’s 1 million powerpoints). He received his Master’s degree in Science from an unnamed university in 1993 and has been doing what is described in technical jargon as “a lot of stuff” since then. With knowledge as broad as the wide Mississippi, his qualifications need not be divulged, and certainly not investigated. He doesn't want anyone to make a fuss over him—just treat him as you would any great man. Please welcome Mark Madsen
A Pragmatic Approach to Analyzing Customers – Mark Madsen
The business market is different today than it was 20 years ago when BI got started. We're just beginning to grasp how to work within the new economic and communication models. Companies can't rely solely on financial and operational metrics any more, and need to analyze customer behaviors in more detail.
The big change in analysis is a move from mass market metrics to individualized data, no longer analyzing or managing by averages. The stream of events and observations available from applications today combined with new platforms for collecting and processing data enables (relatively) easy analysis.
Despite this, many companies struggle to analyze customer data. This talk will describe a handful of customer metrics and models that are (relatively) easy to do, yet are often not done. It's often easier to succeed by stringing together a handful of simple techniques rather than applying advanced techniques.
Expect to come away from this session with:
- a little history of customer data use by marketing and how that has changed in the last 10 years.
- the most common behavioral data sources you have available.
- some of the basic questions that often go unanswered, and data that is not assessed in the proper context.
- some basic analyses you can perform.
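As one illustration of the kind of "basic analyses" the session promises, here is a minimal sketch computing per-customer recency and frequency from a raw event stream. The events, customer IDs, and field layout are invented for illustration; the abstract does not specify which metrics are covered.

```python
# Recency/frequency from an event stream: an example of moving from
# mass-market averages to individualized behavioral metrics.
from collections import defaultdict
from datetime import date

# Hypothetical (customer, event-date) observations.
events = [
    ("cust_a", date(2016, 5, 1)),
    ("cust_a", date(2016, 6, 10)),
    ("cust_b", date(2016, 2, 20)),
]

def recency_frequency(events, today):
    """Return {customer: (days since last event, event count)}."""
    last_seen = {}
    counts = defaultdict(int)
    for cust, day in events:
        counts[cust] += 1
        if cust not in last_seen or day > last_seen[cust]:
            last_seen[cust] = day
    return {c: ((today - last_seen[c]).days, counts[c]) for c in counts}

print(recency_frequency(events, today=date(2016, 6, 15)))
# → {'cust_a': (5, 2), 'cust_b': (116, 1)}
```

Simple techniques like this, strung together, are exactly the "relatively easy yet often not done" analyses the talk describes.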
Slides for Briefing Room webcast ( https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=869f964b1380f728cedde802779a1e12 )
Organizations worldwide are learning hard lessons these days about the constraints of dated information systems. The time-tested process of Extract-Transform-Load (ETL) is fast losing its ability to cope with the volume, velocity and variety of Big Data coming down the pike. Forward-thinking companies are therefore prepping the battle field by designing on-ramps to the future of streaming analytics.
Register for this episode of The Briefing Room to hear Analyst Mark Madsen explain how a new era of data solutions is rising to the challenge of streaming data. He'll be briefed by Steve Wilkes, founder and CTO of the Striim platform. Steve will share how enterprises are turning to streaming data integration, in-memory transformations and continuous processing to achieve the goals of ETL in milliseconds – at a fraction of the cost and complexity of legacy systems. Several case studies will be shared.
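The ETL-in-milliseconds idea can be caricatured with a generator pipeline: each record is transformed as it arrives and flows from source to sink, rather than in a nightly batch. This is a toy sketch of the concept only, not the Striim platform's API; the record format is invented.

```python
# Streaming ETL in miniature: continuous per-record transformation
# between a source and a sink.
import json

def source():
    """Stand-in for an unbounded event stream (here, two JSON strings)."""
    for raw in ['{"user": "a", "amount": "10"}',
                '{"user": "b", "amount": "25"}']:
        yield raw

def transform(stream):
    """The 'T' of ETL, applied in-flight to each record as it passes."""
    for raw in stream:
        rec = json.loads(raw)
        rec["amount"] = int(rec["amount"])  # clean/cast on the fly
        yield rec

def sink(stream):
    """Stand-in for loading into a target store."""
    return list(stream)

loaded = sink(transform(source()))
print(loaded)
# → [{'user': 'a', 'amount': 10}, {'user': 'b', 'amount': 25}]
```

Because the transform is lazy, nothing waits for a batch window: each record is cleansed the moment it is pulled through the pipeline.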
The Innovator's Dilemma of the Translation Industry – Robert Laing
The translation industry is undergoing disruption in a textbook manner described by Clayton Christensen in his book "The Innovator's Dilemma". Companies like Gengo are disrupting traditional players with new production methods, while the industry watches on.
There are many well understood and widely adopted methodologies for building software products. However, the nuances in application often differ widely from company to company.
This deck articulates a framework that I have developed over my career. Some concepts are exclusively my own. Some borrow wholesale from people much smarter than myself: Eric Ries, Anthony Ulwick, Michael Cohn, Clayton Christensen, Alan Klement (I cite sources extensively throughout this deck).
This framework takes an inherently unpredictable, creative process, and makes it repeatable while maintaining flexibility. There is lots of room to adapt and innovate within this framework, both individually and as a team. However, I think it is important that a product organization in a company (developers, designers and PMs) speaks a consistent language and shares the same fundamental methodology. That’s what this framework provides.
Determine the Right Analytic Database: A Survey of New Data Technologies – Mark Madsen
There has been an explosion in database technology designed to handle big data and deep analytics from both established vendors and startups. This session will provide a quick tour of the primary technology innovations and systems powering the analytic database landscape—from data warehousing appliances and columnar databases to massively parallel processing and in-memory technology. The goal is to help you understand the strengths and limitations of these alternatives and how they are evolving so you can select technology that is best suited to your organization and needs.
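One of the innovations the session tours, columnar storage, can be illustrated in a few lines: pivoting rows into columns means an analytic scan of one attribute reads only that attribute's values rather than every record. A minimal sketch of the layout idea, not any particular database's implementation; the sample data is invented.

```python
# Row layout vs. columnar layout for an analytic scan.

# Four hypothetical records in row orientation.
rows = [{"id": i, "amount": i * 1.5, "region": "eu"} for i in range(4)]

# The same data pivoted into column orientation.
columns = {
    "id": [r["id"] for r in rows],
    "amount": [r["amount"] for r in rows],
    "region": [r["region"] for r in rows],
}

# Row store: every full record is visited to aggregate one field.
row_sum = sum(r["amount"] for r in rows)

# Column store: only the "amount" column is read.
col_sum = sum(columns["amount"])

assert row_sum == col_sum
print(col_sum)  # → 9.0
```

The results are identical; the difference is how much data a scan must touch, which is why columnar organization dominates analytic workloads.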
Presentation from the O'Reilly Strata conference, February 2011
There has been a lack of substantive data about the state of open source in the business intelligence and data warehousing market. In this presentation noted industry analyst Mark Madsen will present the results of recent market research on adoption profiles and characteristics for open source BI/DW.
This research surveyed adopters of open source to understand their reasons for adoption and the benefits they experienced. It also captured user demographics to identify who is adopting open source for BI/DW, where they are deploying it, and how it’s being used. Two highly experienced open source BI practitioners, Bruce Belvin (President, Monolith Software Solutions) and Jay Webster (President and COO at Consorte Media) will describe their BI implementations, their criteria and selection methodology, and share best practices.
In the Jobs to Be Done space, I assume from my research that Anthony Ulwick, author of What Customers Want, is the originator of the idea, though Clayton Christensen has helped popularize the concept. On this theory, though, I am staying with Ulwick's work and have used it numerous times. It works! It was not until several months ago that I finally created a mind map of the process. This is my rendition of it.
You wouldn't be surprised if I told you that we live in interesting times. New business models are created today at the same pace at which older ones are being destroyed. Technology is no longer just an enabler for business; it has become the business for most organizations. The cloud (IaaS, PaaS, SaaS) as we know it offers organizations immense opportunity in terms of reducing time to market when delivering engaging customer experiences, but with all of that agility a move to the cloud also brings numerous challenges, some obvious and some not so obvious. In this session we will go over the challenges of engineering systems for the cloud, including a case study of engineering a complex legacy application for the cloud.
Presentation given at IMCW 2013 in Limerick, discussing how the combination of cloud, social, mobile and big data will transform our world moving forward. What can you do to be part of this new revolution?
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/2pjvrpW.
Joe Duffy talks about the concurrency's explosion onto the mainstream over the past 15 years. He looks at some of today's hottest trends (Cloud, IoT, Microservices) and attempts to predict what lies ahead not only for concurrent programming, but also distributed, from now to 15 years into the future. Filmed at qconlondon.com.
Joe Duffy is Director of Engineering for the Compiler and Language Group at Microsoft. He leads the teams building C++, C#, VB, and F# languages, compilers, and static analysis platforms, across many architectures and platforms.
Analysis of applying TRIZ in and on a Large Scale System - Semiconductors – Richard Platt
An analysis of applying TRIZ to an engineering system (semiconductor technology) and the process factors and issues that were found and resolved as part of the implementation of the TRIZ methodology at Intel, including a methodology for designing innovation methods into the design-for-manufacturability process.
Imagine a world where everything you have doubles every two years without fail. This is the world Gordon Moore created for us in 1965. Today it quietly governs everything we do, controlling everyday items from toasters to our cars, all while deflating our economy at a rate which man has never seen before. But for how long can Gordon's magic continue before Moore's Law becomes Moore's Wall?
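The "doubles every two years" rule is easy to make concrete. A small sketch, using the roughly 2,300 transistors of the 1971 Intel 4004 as a starting point (the doubling-period framing is the common simplification of Moore's observation):

```python
# Project exponential doubling over a number of years.

def moores_law(initial, years, doubling_period=2):
    """Count after `years`, doubling every `doubling_period` years."""
    return initial * 2 ** (years // doubling_period)

# ~2,300 transistors (roughly the Intel 4004) projected 40 years forward:
print(moores_law(2300, 40))  # → 2411724800, i.e. 2,300 * 2**20
```

Twenty doublings turn thousands into billions, which is why even a modest slowdown in the doubling period compounds into a very different world.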
Chaos Engineering: Why the World Needs More Resilient Systems – C4Media
Video and slides synchronized, mp3 and slide download available at URL https://bit.ly/2luk9iS.
Tammy Butow shares her experiences using chaos engineering to build resilient systems, when they couldn’t build their systems from scratch. Filmed at qconlondon.com.
Tammy Butow is a Principal SRE at Gremlin where she works on Chaos Engineering, the facilitation of controlled experiments to identify systemic weaknesses. Previously, she led SRE teams at Dropbox responsible for Databases and Storage systems used by over 500 million customers.
About an Immune System Understanding for Cloud-native Applications - Biology ... – Nane Kratzke
Presentation for 9th International Conference on Cloud Computing, GRIDS, and Virtualization (CLOUD COMPUTING 2018) in Barcelona, Spain, 2018.
There is no such thing as an impenetrable system, although the penetration of systems does get harder from year to year. The median days that intruders remained undetected on victim systems dropped from 416 days in 2010 down to 99 in 2016. Perhaps because of that, a new trend in security breaches is to compromise the forensic trail to allow the intruder to remain undetected for longer in victim systems and to retain valuable footholds for as long as possible. This paper proposes an immune system inspired solution which uses a more frequent regeneration of cloud application nodes to ensure that undetected compromised nodes can be purged. This makes it much harder for intruders to maintain a presence on victim systems. Basically the biological concept of cell-regeneration is combined with the information systems concept of append-only logs. Evaluation experiments performed on popular cloud service infrastructures (Amazon Web Services, Google Compute Engine, Azure and OpenStack) have shown that between 6 and 40 nodes of elastic container platforms can be regenerated per hour. Even a large cluster of 400 nodes could be regenerated in somewhere between 9 and 66 hours. So, regeneration shows the potential to reduce the foothold of undetected intruders from months to just hours.
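The regeneration figures in the abstract follow from simple division: at the stated extremes of 6 to 40 node regenerations per hour, a 400-node cluster takes between 10 and roughly 67 hours, in line with the reported 9 to 66 hours (the measured rates vary by platform). A quick check:

```python
# Time to regenerate a whole cluster at a given per-hour regeneration rate.

def hours_to_regenerate(nodes, rate_per_hour):
    return nodes / rate_per_hour

print(hours_to_regenerate(400, 40))  # fastest stated rate → 10.0 hours
print(hours_to_regenerate(400, 6))   # slowest stated rate → ~66.7 hours
```

Either way, the window an undetected intruder can hold a foothold shrinks from months to hours, which is the paper's central claim.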
An analysis of novel business practices (Technology in the corporate world) Roberto Gregoratti
An investigation of the role of technology in the corporate world in the creation of more efficient, newer business practices and models on the side of production, communication, finances and more.
This presentation was prepared as part of a University project and is therefore protected by copyright, held by myself and fellow classmates. Partial or total reproduction is unauthorised without prior consent, please contact me if you wish to use this material.
Similar to Disruptive Innovation: how do you use these theories to manage your IT?
The Black Box: Interpretability, Reproducibility, and Data Management – Mark Madsen
The growing complexity of data science leads to black box solutions that few people in an organization understand. You often hear about the difficulty of interpretability—explaining how an analytic model works—and that you need it to deploy models. But people use many black boxes without understanding them…if they’re reliable. It’s when the black box becomes unreliable that people lose trust.
Mistrust is more likely to be created by the lack of reliability, and the lack of reliability is often the result of misunderstanding essential elements of analytics infrastructure and practice. The concept of reproducibility—the ability to get the same results given the same information—extends your view to include the environment and the data used to build and execute models.
Mark Madsen examines reproducibility and the areas that underlie production analytics and explores the most frequently ignored and yet most essential capability, data management. The industry needs to consider its practices so that systems are more transparent and reliable, improving trust and increasing the likelihood that your analytic solutions will succeed.
This talk will treat the black boxes of ML the way management perceives them: as black boxes.
There is much work on explainable models, interpretability, etc. that is important to the task of reproducibility. Much of it is relevant to the practitioner, but the practitioner can become too focused on the part they are most familiar with. Reproducing the results requires more.
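One concrete slice of the reproducibility idea is fingerprinting the data and parameters that produced a result, so a later run can verify it is working from the same information. A minimal sketch, not a full experiment-tracking system; the data and parameter names are invented for illustration.

```python
# Fingerprint model inputs so "same information" is checkable later.
import hashlib
import json

def fingerprint(data_rows, params):
    """Deterministic hash over training data plus hyperparameters."""
    payload = json.dumps({"data": data_rows, "params": params},
                         sort_keys=True)          # canonical ordering
    return hashlib.sha256(payload.encode()).hexdigest()

data = [[1.0, 2.0], [3.0, 4.0]]
params = {"learning_rate": 0.1, "seed": 42}

tag = fingerprint(data, params)
# Identical inputs reproduce the tag; any change to data or params breaks it.
assert fingerprint(data, params) == tag
assert fingerprint(data, {"learning_rate": 0.2, "seed": 42}) != tag
```

Storing such a tag alongside each model is one small data-management practice that makes a black box auditable even when its internals are not interpretable.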
Operationalizing Machine Learning in the Enterprise – Mark Madsen
TDWI Munich 2019
What does it take to operationalize machine learning and AI in an enterprise setting?
Machine learning in an enterprise setting is difficult, but it seems easy. All you need is some smart people, some tools, and some data. It’s a long way from the environment needed to build ML applications to the environment to run them in an enterprise.
Most of what we know about production ML and AI come from the world of web and digital startups and consumer services, where ML is a core part of the services they provide. These companies have fewer constraints than most enterprises do.
This session describes the nature of ML and AI applications and the overall environment they operate in, explains some important concepts about production operations, and offers some observations and advice for anyone trying to build and deploy such systems.
Building a Data Platform - Strata SF 2019 – Mark Madsen
Building a data lake involves more than installing Hadoop or putting data into AWS. The goal in most organizations is to build multi-use data infrastructure that is not subject to past constraints. This tutorial covers design assumptions, design principles, and how to approach the architecture and planning for multi-use data infrastructure in IT.
[This is a new, changed version of the presentations of the same title from last year's Strata]
Architecting a Data Platform For Enterprise Use (Strata NY 2018) – Mark Madsen
Building a data lake involves more than installing Hadoop or putting data into AWS. The goal in most organizations is to build multi-use data infrastructure that is not subject to past constraints. This tutorial covers design assumptions, design principles, and how to approach the architecture and planning for multi-use data infrastructure in IT.
Long:
The goal in most organizations is to build multi-use data infrastructure that is not subject to past constraints. This session will discuss hidden design assumptions, review design principles to apply when building multi-use data infrastructure, and provide a reference architecture to use as you work to unify your analytics infrastructure.
The focus in our market has been on acquiring technology, and that ignores the more important part: the larger IT landscape within which this technology lives and the data architecture that lies at its core. If one expects longevity from a platform then it should be a designed rather than accidental architecture.
Architecture is more than just software. It starts from use and includes the data, technology, methods of building and maintaining, and organization of people. What are the design principles that lead to good design and a functional data architecture? What are the assumptions that limit older approaches? How can one integrate with, migrate from or modernize an existing data environment? How will this affect an organization's data management practices? This tutorial will help you answer these questions.
Topics covered:
* A brief history of data infrastructure and past design assumptions
* Categories of data and data use in organizations
* Data architecture
* Functional architecture
* Technology planning assumptions and guidance
Architecting a Platform for Enterprise Use - Strata London 2018 – Mark Madsen
The goal in most organizations is to build multi-use data infrastructure that is not subject to past constraints. This session will discuss hidden design assumptions, review design principles to apply when building multi-use data infrastructure, and provide a reference architecture to use as you work to unify your analytics infrastructure.
The focus in our market has been on acquiring technology, and that ignores the more important part: the larger IT landscape within which this technology lives and the data architecture that lies at its core. If one expects longevity from a platform then it should be a designed rather than accidental architecture.
Architecture is more than just software. It starts from use and includes the data, technology, methods of building and maintaining, and organization of people. What are the design principles that lead to good design and a functional data architecture? What are the assumptions that limit older approaches? How can one integrate with, migrate from or modernize an existing data environment? How will this affect an organization's data management practices? This tutorial will help you answer these questions.
Topics covered:
* A brief history of data infrastructure and past design assumptions
* Categories of data and data use in organizations
* Analytic workload characteristics and constraints
* Data architecture
* Functional architecture
* Tradeoffs between different classes of technology
* Technology planning assumptions and guidance
A Brief Tour through the Geology & Endemic Botany of the Klamath-Siskiyou Range – Mark Madsen
A hotspot of diversity for rare plants, butterflies and birds, the Klamath-Siskiyou region of southern Oregon is a scientist's (and naturalist's) paradise. This transverse range runs from the Cascades to the Pacific Ocean, creating an east-west corridor between the coast and the volcanic Cascades range. Mark Madsen's love of biology while living in the area for 15 years sparked an interest in botanical taxonomy, in the world of serpentine soils, and in the plant communities thriving in the region, including remnant species from the last ice age.
Pay no attention to the man behind the curtain - the unseen work behind data ... – Mark Madsen
Goal: explain the nature of the work of an analytics team to a manager, and enable people on those teams to explain what a data science team needs to a manager.
It seems as if every organization wants to enable analytical decision-making and embed analytics into operational processes. What can you do with analytics? It looks like anything is possible. What can you really do? Probably a lot less than you expect. Why is this? Vendors promise easy-to-use analytics tools and services but they rarely deliver. The products may be easy but the work is still hard.
Using analytics to solve problems depends on many factors beyond the math: people, processes, the skills of the analyst, the technology used, the data. Technology is the easy part. Figuring out what to do and how to do it is a lot harder. Despite this, fancy new tools get all the attention and budget.
People and data are the truly hard parts. People, because many believe that data is absolute rather than relative, and that analytic models produce an answer rather than a range of answers with varying degrees of truth, accuracy and applicability. Data, because managing data for analytics is a nuanced, detail-oriented and seemingly dull task left to back-office IT.
If your goal is to build a repeatable analytics capability rather than a one-off analytics project then you will need to address the parts that are rarely mentioned. This talk will explain some of the unseen and little-discussed aspects involved when building and deploying analytics.
Building the Enterprise Data Lake: A look at architecture – Mark Madsen
The topic is building an Enterprise Data Lake, discussing high level data and technology architecture. We will describe the architecture of a data warehouse, how a data lake needs to differ, and show a high level functional and data architecture for a data lake. This webinar will cover:
* Why dumping data into Hadoop and letting users get it out doesn't work
* The difference between a Hadoop application and a Data Lake
* Why new ideas about data architecture are a key element
* An Enterprise Data Lake reference architecture to frame what must be built
On the edge: analytics for the modern enterprise (analyst comments) – Mark Madsen
On the Edge: Analytics for the Modern Enterprise
[these are the analyst comments on enterprise data architecture and streaming]
Webcast description: The speed of business today requires new approaches to generating and leveraging analytics. Latencies of a day, an hour or even minutes no longer suffice in many situations. For these use cases, organizations must embrace analytics at the edge: a process that involves targeted number-crunching at the fringe of the enterprise. When designed properly, these systems give companies a leg up on their competitors. Register for this episode of The Briefing Room to hear veteran Analyst Mark Madsen of Third Nature explain how a new era of information architectures is now unfolding, paving the way to much more responsive and agile business models. He'll be briefed by Kim Macpherson of the Cisco Data and Analytics Business Unit, who will explain how her company's platform is uniquely suited for this new, federated analytic paradigm. She'll demonstrate how edge analytics can help companies address opportunities quickly and effectively.
Don't let data get in the way of a good story – Mark Madsen
Storytelling is not about raising someone’s IQ, it’s about raising their blood pressure. Stories engage emotions rather than intellect, making “storytelling with data” a poor metaphor for data visualization when our goal is to communicate clearly.
People are often confused or misled by "story", thinking they need a classical story structure with protagonists, action and resolution, when the job may be simpler, or more complicated. Some of the storytelling tools and suggestions vendors promote would get you kicked out of your boss's office if you used them without taking into account their goals and context.
Narrative is what we are really talking about, not story. We need to focus our attention on narrative techniques rather than “story” and its forced linear structure. This means understanding why we want to communicate: is it to explain, to build shared understanding, to convince others that our interpretation is the right one?
We use visualization as a tool for many different purposes, communication being one. The idea of narratives with data is a good one, but not all narrative is story. The purpose of this talk is to provide clarity around the goals of communicating with data and to provide a goal-oriented framework that escapes the bad metaphorical frame imposed by “storytelling”.
The problems of scale, speed, persistence and context are the most important design problems we'll have to deal with during the next decade. Scale, because we're creating and recording more data than at any time in human history – much of it of dubious value, but none of it obviously value-less. Speed, because data flows now. Ceaselessly. In high volume. It has to be persisted at multiple latencies, from milliseconds to decades. And context, because the context of creation is different from the context of transmission, which is different from the context of use.
There are a lot of red herrings, false premises and just-plain-dementia that get in the way of us seeing the problem clearly. We must work through what we mean by "structured" and "unstructured", what we mean by “big data” and why we need new technologies to solve some of our data problems. But “new technologies” doesn’t mean reinventing old technologies while ignoring the lessons of the past. There are reasons relational databases survived while hierarchical, document and object databases were market failures, technologies that may be poised to fail again, 20 years later.
What we believe about data’s structure, schema, and semantics is as important as the NoSQL and relational databases we use. The technologies impose constraints on the real problem: how we make sense of data in order to tell a computer what to do, or to inform human decisions. Most discussions of data and code overlook the unconscious tradeoffs made when selecting these technology handcuffs.
Cloud computing is creating a new era for IT by providing a set of services that appear to have infinite capacity, immediate deployment and high availability at trivial cost. These are all appealing to someone running a data warehouse when data volume, use and cost are growing at a rapid rate.
Today most organizations look at cloud as a way to lower data center and IT costs. While cost reduction is a real benefit, there is more value in the increased scalability, speed to procure (and give up) resources, and ease of delivery in cloud environments.
Database workloads are particularly challenging in the cloud. Cloud deployments beyond a moderate scale favor shared-nothing database architectures designed to run transparently in a multi-node environment. We are still in an early period of standardization and design of software to run in the cloud. Not all workloads are suitable for deployment on a collection of small virtualized servers today. Business intelligence and analytic database workloads fall into this area, raising the importance of analysis for fit with public and private cloud options.
Open Data: Free Data Isn't the Same as Freeing Data – Mark Madsen
Talk given at the South Tyrol Innovation conference on open data, mainly focused on government open data.
Open data doesn’t mean free data and other maunderings about public data, public goods, and networked data as a resource.
The hidden costs of open data (and how to pay for them).
Beyond transparency (which is where a lot of this started).
Description of the basic cloud principles, the cost & deployment model for cloud, shortcomings for BI workloads beyond modest scale, some stats on market adoption/preference of cloud for DW.
Big data is a big part of the disruption hitting this market, but not in the way most people think. It's not replacing the data warehouse, but it is changing the technology stack. It doesn't eliminate data management, but it does redefine enterprise data architecture. Big data is and isn't many things. It's important to understand which information uses are well supported and which have yet to be addressed. Otherwise you risk replacing one set of problems with another. Come to this session to hear some observations on what big data is, isn't and aspires to be.
A video is available, starts at 1:03 into this Strata online event: http://www.youtube.com/watch?v=gLsHI1ZglKw
Big Data Wonderland: Two Views on the Big Data Revolution – Mark Madsen
To kick off the Big Data for Enterprise IT Day, we present two views of big data. Is it truly something new, or just an evolution of what we have already? Join us for an interesting and entertaining talk that will help frame your thinking on big data. We take on the roles of former bosses: the techno-lustful and the luddite, and debate the key talking points put forth in the market.
An earlier video of this talk can be seen at http://www.youtube.com/watch?v=qnHHOWz5uvM
Using Data Virtualization to Integrate With Big Data – Mark Madsen
Hadoop and big data don't sit as an island in organizations. Analyzing event streams and similar data requires integrating with other data from systems across the organization. This isn't easy with big data systems today because of disparities between their technologies and environments and those of traditional IT. Data virtualization is one way to smooth over the integration, allowing Hadoop to access other data, or allowing SQL-oriented tools to access Hadoop.
One Size Doesn't Fit All: The New Database Revolution – Mark Madsen
Slides from a webcast for the database revolution research report (report will be available at http://www.databaserevolution.com)
Choosing the right database has never been more challenging, or potentially rewarding. The options available now span a wide spectrum of architectures, each of which caters to a particular workload. The range of pricing is also vast, with a variety of free and low-cost solutions now challenging the long-standing titans of the industry. How can you determine the optimal solution for your particular workload and budget? Register for this Webcast to find out!
Robin Bloor, Ph.D. Chief Analyst of the Bloor Group, and Mark Madsen of Third Nature, Inc. will present the findings of their three-month research project focused on the evolution of database technology. They will offer practical advice for the best way to approach the evaluation, procurement and use of today’s database management systems. Bloor and Madsen will clarify market terminology and provide a buyer-focused, usage-oriented model of available technologies.
Webcast video and audio will be available on the report download site as well.
Disruptive Innovation: how do you use these theories to manage your IT?
1. Disruptive Innovation: Past, Present, Future
(how to use these theories to manage your IT)
February 2016
Mark Madsen - @markmadsen - http://ThirdNature.net
78. Slide 81, November 2010, Mark Madsen
If BI is a commodity, why does it cost so much?
[Slide diagram: an end-to-end BI architecture layered as Processes, Applications, Data Integration, Storage, EDM/BRM, Delivery and Consumers. Business processes (Purchasing, Distribution, Manufacturing, Sales & Service) feed applications (ERP, SCM, SFA, CRM) through integration layers (batch ETL, low-latency ETL, EII, ESB, EDR) into storage (data warehouse, data mart, ODS, stream db/cache, content store). Delivery components (BPM/workflow, BRE, CEP, data services, transaction services) support consumer activities (identify, analyze, predict, monitor, explore, prescribe), with manual and automated feedback loops. An example credit-risk decision tree classifies applicants as good or bad credit risks based on income (> $40K) and debt (< 10% of income, or 0%).]