What if every member of your software development team, from project managers and business analysts to testing and documentation staff, could create and modify web applications and web services? With traditional SQL solutions this was difficult because web pages had to be converted to objects, objects to tables, and then back again. But with native XML databases and drag-and-drop forms builders, data can flow from the XML model of a web form to the database and back without translation. This radically simpler process, combined with standardized query languages, makes it easier for non-programmers to build and maintain their own applications and web services.
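A rough sketch of the idea in Python, using the standard library's ElementTree in place of a real native XML database (the form fields, values, and the in-memory "collection" are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A hypothetical contact form submitted by a browser as XML.
form_xml = """
<contact>
  <name>Ada Lovelace</name>
  <email>ada@example.com</email>
  <message>Please send the catalog.</message>
</contact>
"""

# In a native XML database the document is stored as-is; here a
# plain list stands in for the database collection.
collection = [ET.fromstring(form_xml)]

# Query with an XPath-style path instead of mapping the form to
# objects and then to relational tables.
emails = [doc.findtext("email") for doc in collection]
print(emails)  # ['ada@example.com']
```

The point of the sketch is what is absent: no object model, no table schema, and no translation layer between the form's XML and the stored document.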
The Heart of Data Modeling: The Best Data Modeler is a Lazy Data Modeler (DATAVERSITY)
We're under pressure to do more with fewer resources, and organizations are often short on experienced data modelers. So why should we spend time doing things that can be done by robots? Well, not robots, but automation.
In this month's webinar, Karen demonstrates the kinds of automation techniques available in leading data modeling tools such as ERwin, ER/Studio, and PowerDesigner. She will also leave you with 10 tips on being lazier. When did a webinar last promise you that?
The Shifting Landscape of Data Integration (DATAVERSITY)
Enterprises and organizations of every industry and scale are working to leverage data to achieve their strategic objectives, whether those are to be more profitable, effective, risk-tolerant, prepared, sustainable, or adaptable in an ever-changing world. Data has exploded in volume over the last decade as humans and machines alike produce it at an exponential pace. Exciting technologies have also emerged around that data, expanding what we can do with it.
Behind this data revolution, there are forces at work, causing enterprises to shift the way they leverage data and accelerate the demand for leverageable data. Organizations (and the climates in which they operate) are becoming more and more complex. They are also becoming increasingly digital and, thus, dependent on how data informs, transforms, and automates their operations and decisions. With increased digitization comes an increased need for both scale and agility at scale.
In this session, we have undertaken an ambitious goal of evaluating the current vendor landscape and assessing which platforms have made, or are in the process of making, the leap to this new generation of Data Management and integration capabilities.
Karya develops mobile application services that fit the unique needs of your business. Our Mobile Application Services help users make better use of the power of mobile technology.
How Enterprise Solutions Break Silos, Increase Communication and Improve Customer Service
Presenters:
Cameron Boland
Vice President of Operations
KeyMark Inc.
Victoria Pruitt
Vice President of Sales
KeyMark Inc.
This presentation covers the interdepartmental benefits of an enterprise implementation, including costs, communication, and transparency. Learn why stakeholder involvement throughout the buying process is critical to collaboration and a successful solution. You’ll also gain an understanding of enterprise licensing, tactics for a successful end-user rollout, and effective enterprise support.
Slides: Accelerating Queries on Cloud Data Lakes (DATAVERSITY)
Using “zero-copy” hybrid bursting on remote data to solve data lake analytics capacity and performance problems.
Data scientists want answers on demand. But in today’s enterprise architectures, the reality is that most data remains on-prem, despite the promise of cloud-based analytics. Moving all that data to the cloud has typically not been possible for many reasons including cost, latency, and technical difficulty. So, what if there was a technology that would connect these on-prem environments to any major cloud platform, enabling high-powered computing without the need to move massive amounts of data?
Join us for this webinar where Alex Ma of Alluxio, an open-source data orchestration platform, will discuss how a data orchestration approach offers a solution for connecting traditional on-prem data centers and cloud data lakes with other clouds and data centers. With Alluxio’s “zero-copy” burst solution, companies can bridge remote data centers and data lakes with computing frameworks in other locations, enabling them to offload, compute, and leverage the flexibility, scalability, and power of the cloud for their remote data.
The Pivotal Business Data Lake provides a flexible blueprint to meet your business's future information and analytics needs while avoiding the pitfalls of typical EDW implementations. Pivotal’s products will help you overcome challenges like reconciling corporate and local needs, providing real-time access to all types of data, integrating data from multiple sources and in multiple formats, and supporting ad hoc analysis.
Learn about the three advances in database technologies that eliminate the need for star schemas and the resulting maintenance nightmare.
Relational databases in the 1980s were typically designed using the Codd-Date rules for data normalization. It was the most efficient way to store data used in operations. As BI and multi-dimensional analysis became popular, the relational databases began to have performance issues when multiple joins were requested. The development of the star schema was a clever way to get around performance issues and ensure that multi-dimensional queries could be resolved quickly. But this design came with its own set of problems.
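For readers who have not worked with the design, here is a minimal star schema sketched in Python with the standard library's sqlite3 module (the table names, columns, and figures are invented for illustration, not taken from the webinar):

```python
import sqlite3

# A star schema: one central fact table with foreign keys into
# small dimension tables, so analytical queries need only
# single-hop joins.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (date_id INTEGER, product_id INTEGER, amount REAL);

INSERT INTO dim_date    VALUES (1, 2023), (2, 2024);
INSERT INTO dim_product VALUES (10, 'books'), (11, 'games');
INSERT INTO fact_sales  VALUES (1, 10, 100.0), (2, 10, 150.0), (2, 11, 75.0);
""")

# A multi-dimensional query (sales by year and category) resolves
# with two simple joins against the fact table.
rows = db.execute("""
    SELECT d.year, p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d    ON f.date_id = d.date_id
    JOIN dim_product p ON f.product_id = p.product_id
    GROUP BY d.year, p.category
    ORDER BY d.year, p.category
""").fetchall()
print(rows)  # [(2023, 'books', 100.0), (2024, 'books', 150.0), (2024, 'games', 75.0)]
```

Every new way of slicing the data means a new dimension table, new foreign keys in the fact table, and new indexes, which is exactly the maintenance burden the next paragraph describes.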
Unfortunately, the analytic process is never simple. Business users always think up unanticipated ways to query the data, and the data itself often changes in unpredictable ways. The result is a need for new dimensions, new and mostly redundant star schemas with their indexes, and maintenance headaches such as handling slowly changing dimensions. The analytical environment grows overly complex and difficult to maintain, new capabilities are long delayed, and both the users and the maintainers are left unsatisfied.
There must be a better way!
Watch this webinar to learn:
- The three technological advances in data storage that eliminate star schemas
- How these innovations benefit analytical environments
- The steps you will need to take to reap the benefits of being star schema-free
BOSS Technologies is an Oracle Gold Partner offering Solutions for Staffing and Services. Let us build your Oracle Implementation Team. Scalable. Enterprise. Services.
Data-Ed Online: Data Architecture Requirements (DATAVERSITY)
Data architecture is foundational to an information-based operational environment. It is your data architecture that organizes your data assets so they can be leveraged in your business strategy to create real business value. Even though this is important, not all data architectures are used effectively. This webinar describes the use of data architecture as a basic analysis method. Various uses of data architecture to inform, clarify, understand, and resolve aspects of a variety of business problems will be demonstrated. As opposed to showing how to architect data, your presenter Dr. Peter Aiken will show how to use data architecting to solve business problems. The goal is for you to be able to envision a number of uses for data architectures that will raise the perceived utility of this analysis method in the eyes of the business.
Takeaways:
Understanding how to contribute to organizational challenges beyond traditional data architecting
How to utilize data architectures in support of business strategy
Understanding foundational data architecture concepts based on the DAMA DMBOK
Data architecture guiding principles & best practices
Estimating the Total Costs of Your Cloud Analytics Platform (DATAVERSITY)
Organizations today need a broad set of enterprise data cloud services with key data functionality to modernize applications and utilize machine learning. They need a platform designed to address multi-faceted needs by offering multi-function Data Management and analytics to solve the enterprise’s most pressing data and analytic challenges in a streamlined fashion. They need a worry-free experience with the architecture and its components.
A complete machine learning infrastructure cost for the first modern use case at a midsize to large enterprise will be anywhere from $2M to $14M. Get this data point as you take the next steps on your journey.
1. The difficulties in deploying and managing the life cycle of data-heavy applications
2. A review of the Kubernetes landscape with respect to data-heavy applications
3. Robin's approach to orchestrating data-heavy applications
Caserta Concepts, Datameer, and Microsoft shared their combined knowledge and a use case on big data, the cloud, and deep analytics. Attendees learned how a global leader in the test, measurement, and control systems market reduced its big data implementations from 18 months to just a few months.
Speakers shared how to provide a business user-friendly, self-service environment for data discovery and analytics, and focused on how to extend and optimize Hadoop-based analytics, highlighting the advantages and practical applications of deploying in the cloud for enhanced performance, scalability, and lower TCO.
Agenda included:
- Pizza and Networking
- Joe Caserta, President, Caserta Concepts - Why are we here?
- Nikhil Kumar, Sr. Solutions Engineer, Datameer - Solution use cases and technical demonstration
- Stefan Groschupf, CEO & Chairman, Datameer - The evolving Hadoop-based analytics trends and the role of cloud computing
- James Serra, Data Platform Solution Architect, Microsoft, Benefits of the Azure Cloud Service
- Q&A, Networking
For more information on Caserta Concepts, visit our website: http://casertaconcepts.com/
Using Data Platforms That Are Fit-For-Purpose (DATAVERSITY)
We must grow our organization's data capabilities to deal fully with the many and varied forms of data. This cannot be accomplished without an intense focus on the many and growing technical platforms that can be used to store, view, and manage data. More of them than ever have merit in organizations today.
This session sorts out the valuable data stores, how they work, what workloads they are good for, and how to build the data foundation for a modern competitive enterprise.
What Comes After The Star Schema? Dimensional Modeling For Enterprise Data Hubs (Cloudera, Inc.)
Dimensional modeling and the star schema are some of the most important ideas in the history of analytics and data management. They provided a common language and set of patterns that allowed a broad class of users to analyze business processes and spawned an entire ecosystem. With the rise of enterprise data hubs that allow us to combine ETL, search, SQL, and machine learning in a single platform, we need to extend the principles of dimensional modeling to support new and diverse analytical workloads and users. We'll illustrate these concepts by walking through the design of a customer-centric data hub that uses all of the components of an EDH to enable everyone to understand the way that customers experience a company.
Presenter:
Josh Wills, Senior Director Data Science
Updated: October 6, 2014
Intuit's Data Mesh - Data Mesh Learning Community meetup 5.13.2021 (Tristan Baker)
Past, present, and future of data mesh at Intuit. This deck describes a vision and strategy for improving data worker productivity through a Data Mesh approach to organizing data and holding data producers accountable. Delivered at the inaugural Data Mesh Learning meetup on 5/13/2021.
Data Lakehouse, Data Mesh, and Data Fabric (r1) (James Serra)
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
Data Ninja Webinar Series: Realizing the Promise of Data Lakes (Denodo)
Watch the full webinar: Data Ninja Webinar Series by Denodo: https://goo.gl/QDVCjV
The expanding volume and variety of data originating from sources that are both internal and external to the enterprise are challenging businesses in harnessing their big data for actionable insights. In their attempts to overcome big data challenges, organizations are exploring data lakes as consolidated repositories of massive volumes of raw, detailed data of various types and formats. But creating a physical data lake presents its own hurdles.
Attend this session to learn how to effectively manage data lakes for improved agility in data access and enhanced governance.
This is session 5 of the Data Ninja Webinar Series organized by Denodo. If you want to learn more about some of the solutions enabled by data virtualization, click here to watch the entire series: https://goo.gl/8XFd1O
Data-Ed Online Presents: Data Warehouse Strategies (DATAVERSITY)
Integrating data across systems has been a perpetual challenge. Unfortunately, the current technology-focused solutions have not helped IT to improve its dismal project success statistics. Data warehouses, BI implementations, and general analytical efforts achieve the same levels of success as other IT projects – approximately one-third are considered successes when measured against price, schedule, or functionality objectives. The first step is determining the appropriate analysis approach to the data system integration challenge. The second step is understanding the strengths and weaknesses of various approaches. It turns out that proper analysis at this stage makes the actual technology selection far more accurate. Only when these are accomplished can proper matching between problem and capabilities be achieved as the third step, and true business value be delivered. This webinar will illustrate that good systems development more often depends on at least three data management disciplines to provide a solid foundation.
Takeaways:
Data system integration challenge analysis
Understanding of a range of data system-integration technologies including
Problem space (BI, Analytics, Big Data), Data (Warehousing, Vault, Cube) and alternative approaches (Virtualization, Linked Data, Portals, Meta-models)
Understanding foundational data warehousing & BI concepts based on the Data Management Body of Knowledge (DMBOK)
How to utilize data warehousing & BI in support of business strategy
The Heart of Data Modeling: 7 Ways Your Agile Project is Managing Data Wrong (DATAVERSITY)
Is your organization using agile approaches on systems development projects? Have you found conflicting opinions about what should be done, when it should be done, and who should do it? Is there even a suggestion that data modeling isn’t needed on an Agile project? Are your data architects stuck in a waterfall world? Are you asking for “no more changes” to the data model? Do your developers think that “just the right documentation” means no modeling allowed? Does anyone even know where the reference data for the application is located, or how it is updated?
In this month’s webinar, Karen will show you how data modeling and Agile approaches CAN work together to deliver quality information systems and solutions, with fewer dysfunctions and fewer tears.
Fast and Furious: From POC to an Enterprise Big Data Stack in 2014 (MapR Technologies)
View this webinar presentation from CenturyLink Technology Solutions (formerly Savvis) and MapR as we deconstruct and demystify “the enterprise big data stack.” We provide a more holistic view of the landscape, explore use cases that show how you can derive business value from it, and share best practices for navigating the fragmented big data environment.
Slides: Proven Strategies for Hybrid Cloud Computing with Mainframes — From A... (DATAVERSITY)
Mainframes continue to perform mission-critical transaction processing and contain massive amounts of core business data. But digital transformation initiatives and cloud computing have created both opportunities and challenges for unlocking and utilizing this data. Qlik and AWS will share some of the proven strategies from successful customer deployments across a range of different mainframe to cloud use cases, including legacy application modernization, data analytics, and data migrations.
In this presentation, you will learn how to:
• Replicate very large volumes of mainframe data in real-time to the cloud
• Automate the creation of analytics-ready data lakes and data warehouses
• Achieve a 30% reduction in cost of compute
Big Data Made Easy: A Simple, Scalable Solution for Getting Started with Hadoop (Precisely)
With so many new, evolving frameworks, tools, and languages, a new big data project can lead to confusion and unwarranted risk.
Many organizations have found Data Warehouse Optimization with Hadoop to be a good starting point on their Big Data journey. Offloading ETL workloads from the enterprise data warehouse (EDW) into Hadoop is a well-defined use case that produces tangible results for driving more insights while lowering costs. You gain significant business agility, avoid costly EDW upgrades, and free up EDW capacity for faster queries. This quick win builds credibility and generates savings to reinvest in more Big Data projects.
A proven reference architecture that includes everything you need in a turnkey solution – the Hadoop distribution, data integration software, servers, networking and services – makes it even easier to get started.
Slides: Moving from a Relational Model to NoSQL (DATAVERSITY)
Businesses are quickly moving to NoSQL databases to power their modern applications. However, a technology migration involves risk, especially if you have to change your data model. What if you could host a relatively unmodified RDBMS schema on your NoSQL database, then optimize it over time?
We’ll show you how Couchbase makes it easy to:
• Use SQL for JSON to query your data and create joins
• Optimize indexes and perform HashMap queries
• Build applications and analysis with NoSQL
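A small sketch of the kind of schema migration described above, using plain Python dictionaries to stand in for relational rows and the resulting JSON document (the table names, fields, and values are invented for illustration; this is not Couchbase's API):

```python
import json

# A row from a hypothetical relational "customers" table, plus its
# one-to-many "orders" rows, as an RDBMS would store them.
customer_row = {"id": 42, "name": "Acme Corp"}
order_rows = [
    {"id": 1, "customer_id": 42, "total": 99.5},
    {"id": 2, "customer_id": 42, "total": 12.0},
]

# Folded into one JSON document: the host-then-optimize path, where
# embedding the child rows removes the join the RDBMS needed.
doc = {
    "type": "customer",
    "id": customer_row["id"],
    "name": customer_row["name"],
    "orders": [{"id": o["id"], "total": o["total"]} for o in order_rows],
}
print(json.dumps(doc, indent=2))
```

The initial document can mirror the relational schema almost field for field; denormalizations like the embedded `orders` array are the optimizations you apply over time.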
The Pivotal Business Data Lake provides a flexible blueprint to meet your business's future information and analytics needs while avoiding the pitfalls of typical EDW implementations. Pivotal’s products will help you overcome challenges like reconciling corporate and local needs, providing real-time access to all types of data, integrating data from multiple sources and in multiple formats, and supporting ad hoc analysis.
Learn about the three advances in database technologies that eliminate the need for star schemas and the resulting maintenance nightmare.
Relational databases in the 1980s were typically designed using the Codd-Date rules for data normalization. It was the most efficient way to store data used in operations. As BI and multi-dimensional analysis became popular, the relational databases began to have performance issues when multiple joins were requested. The development of the star schema was a clever way to get around performance issues and ensure that multi-dimensional queries could be resolved quickly. But this design came with its own set of problems.
Unfortunately, the analytic process is never simple. Business users always think up unimaginable ways to query the data. And the data itself often changes in unpredictable ways. These result in the need for new dimensions, new and mostly redundant star schemas and their indexes, maintenance difficulties in handling slowly changing dimensions, and other problems causing the analytical environment to become overly complex, very difficult to maintain, long delays in new capabilities, resulting in an unsatisfactory environment for both the users and those maintaining it.
There must be a better way!
Watch this webinar to learn:
- The three technological advances in data storage that eliminate star schemas
- How these innovations benefit analytical environments
- The steps you will need to take to reap the benefits of being star schema-free
BOSS Technologies is an Oracle Gold Partner offering Solutions for Staffing and Services. Let us build your Oracle Implementation Team. Scalable. Enterprise. Services.
Data-Ed Online: Data Architecture RequirementsDATAVERSITY
Data architecture is foundational to an information-based operational environment. It is your data architecture that organizes your data assets so they can be leveraged in your business strategy to create real business value. Even though this is important, not all data architectures are used effectively. This webinar describes the use of data architecture as a basic analysis method. Various uses of data architecture to inform, clarify, understand, and resolve aspects of a variety of business problems will be demonstrated. As opposed to showing how to architect data, your presenter Dr. Peter Aiken will show how to use data architecting to solve business problems. The goal is for you to be able to envision a number of uses for data architectures that will raise the perceived utility of this analysis method in the eyes of the business.
Takeaways:
Understanding how to contribute to organizational challenges beyond traditional data architecting
How to utilize data architectures in support of business strategy
Understanding foundational data architecture concepts based on the DAMA DMBOK
Data architecture guiding principles & best practices
Estimating the Total Costs of Your Cloud Analytics Platform DATAVERSITY
Organizations today need a broad set of enterprise data cloud services with key data functionality to modernize applications and utilize machine learning. They need a platform designed to address multi-faceted needs by offering multi-function Data Management and analytics to solve the enterprise’s most pressing data and analytic challenges in a streamlined fashion. They need a worry-free experience with the architecture and its components.
A complete machine learning infrastructure cost for the first modern use case at a midsize to large enterprise will be anywhere from $2M to $14M. Get this data point as you take the next steps on your journey.
1. What are the difficulties in deploying and managing the life cycle of data-heavy application
2. Review of kubernetes landscape w.r.t data-heavy applications
3. Robin approach to orchestrating data-heavy applications
Caserta Concepts, Datameer and Microsoft shared their combined knowledge and a use case on big data, the cloud and deep analytics. Attendes learned how a global leader in the test, measurement and control systems market reduced their big data implementations from 18 months to just a few.
Speakers shared how to provide a business user-friendly, self-service environment for data discovery and analytics, and focus on how to extend and optimize Hadoop based analytics, highlighting the advantages and practical applications of deploying on the cloud for enhanced performance, scalability and lower TCO.
Agenda included:
- Pizza and Networking
- Joe Caserta, President, Caserta Concepts - Why are we here?
- Nikhil Kumar, Sr. Solutions Engineer, Datameer - Solution use cases and technical demonstration
- Stefan Groschupf, CEO & Chairman, Datameer - The evolving Hadoop-based analytics trends and the role of cloud computing
- James Serra, Data Platform Solution Architect, Microsoft, Benefits of the Azure Cloud Service
- Q&A, Networking
For more information on Caserta Concepts, visit our website: http://casertaconcepts.com/
Using Data Platforms That Are Fit-For-PurposeDATAVERSITY
We must grow the data capabilities of our organization to fully deal with the many and varied forms of data. This cannot be accomplished without an intense focus on the many and growing technical bases that can be used to store, view, and manage data. There are many, now more than ever, that have merit in organizations today.
This session sorts out the valuable data stores, how they work, what workloads they are good for, and how to build the data foundation for a modern competitive enterprise.
What Comes After The Star Schema? Dimensional Modeling For Enterprise Data HubsCloudera, Inc.
Dimensional modeling and the star schema are some of the most important ideas in the history of analytics and data management. They provided a common language and set of patterns that allowed a broad class of users to analyze business processes and spawned an entire ecosystem. With the rise of enterprise data hubs that allow us to combine ETL, search, SQL, and machine learning in a single platform, we need to extend the principles of dimensional modeling to support new and diverse analytical workloads and users. We'll illustrate these concepts by walking through the design of a customer-centric data hub that uses all of the components of an EDH to enable everyone to understand the way that customers experience a company.
Presenter:
Josh Wills, Senior Director Data Science
Updated: October 6, 2014
Intuit's Data Mesh - Data Mesh Leaning Community meetup 5.13.2021Tristan Baker
Past, present and future of data mesh at Intuit. This deck describes a vision and strategy for improving data worker productivity through a Data Mesh approach to organizing data and holding data producers accountable. Delivered at the inaugural Data Mesh Leaning meetup on 5/13/2021.
Data Lakehouse, Data Mesh, and Data Fabric (r1)James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
Data Ninja Webinar Series: Realizing the Promise of Data LakesDenodo
Watch the full webinar: Data Ninja Webinar Series by Denodo: https://goo.gl/QDVCjV
The expanding volume and variety of data originating from sources that are both internal and external to the enterprise are challenging businesses in harnessing their big data for actionable insights. In their attempts to overcome big data challenges, organizations are exploring data lakes as consolidated repositories of massive volumes of raw, detailed data of various types and formats. But creating a physical data lake presents its own hurdles.
Attend this session to learn how to effectively manage data lakes for improved agility in data access and enhanced governance.
This is session 5 of the Data Ninja Webinar Series organized by Denodo. If you want to learn more about some of the solutions enabled by data virtualization, click here to watch the entire series: https://goo.gl/8XFd1O
Data-Ed Online Presents: Data Warehouse StrategiesDATAVERSITY
Integrating data across systems has been a perpetual challenge. Unfortunately, the current technology-focused solutions have not helped IT to improve its dismal project success statistics. Data warehouses, BI implementations, and general analytical efforts achieve the same levels of success as other IT projects – approximately 1/3rd are considered successes when measured against price, schedule, or functionality objectives. The first step is determining the appropriate analysis approach to the data system integration challenge. The second step is understanding the strengths and weaknesses of various approaches. Turns out that proper analysis at this stage makes actual technology selection far more accurate. Only when these are accomplished can proper matching between problem and capabilities be achieved as the third step and true business value be delivered. This webinar will illustrate that good systems development more often depends on at least three data management disciplines in order to provide a solid foundation.
Takeaways:
Data system integration challenge analysis
Understanding of a range of data system-integration technologies including
Problem space (BI, Analytics, Big Data), Data (Warehousing, Vault, Cube) and alternative approaches (Virtualization, Linked Data, Portals, Meta-models)
Understanding foundational data warehousing & BI concepts based on the Data Management Body of Knowledge (DMBOK)
How to utilize data warehousing & BI in support of business strategy
The Heart of Data Modeling: 7 Ways Your Agile Project is Managing Data WrongDATAVERSITY
Is your organization using agile approaches to systems development projects? Have you found conflicting opinions about what should be done, when it should be done, and who should do it? Is there even a suggestion that data modeling isn’t needed on an Agile project? Are your data architects stuck in a waterfall world? Are you asking for “no more changes” to the data model? Do your developers think that “just the right documentation” means no modeling allowed? Does anyone even know where the reference data for the application is located? Or how it is updated?
In this month’s webinar, Karen will show you how data modeling and Agile approaches CAN work together to deliver quality information systems and solutions, with fewer dysfunctions and fewer tears.
Fast and Furious: From POC to an Enterprise Big Data Stack in 2014MapR Technologies
View this webinar presentation from CenturyLink Technology Solutions (formerly Savvis) and MapR as we deconstruct and demystify “the enterprise big data stack.” We provide a more holistic view of the landscape, explore use cases that show how you can derive business value from it, and share best practices for navigating the fragmented big data environment.
Slides: Proven Strategies for Hybrid Cloud Computing with Mainframes — From A...DATAVERSITY
Mainframes continue to perform mission-critical transaction processing and contain massive amounts of core business data. But digital transformation initiatives and cloud computing have created both opportunities and challenges for unlocking and utilizing this data. Qlik and AWS will share some of the proven strategies from successful customer deployments across a range of different mainframe to cloud use cases, including legacy application modernization, data analytics, and data migrations.
In this presentation, you will learn how to:
• Replicate very large volumes of mainframe data in real-time to the cloud
• Automate the creation of analytics-ready data lakes and data warehouses
• Achieve a 30% reduction in cost of compute
Big Data Made Easy: A Simple, Scalable Solution for Getting Started with HadoopPrecisely
With so many new, evolving frameworks, tools, and languages, a new big data project can lead to confusion and unwarranted risk.
Many organizations have found Data Warehouse Optimization with Hadoop to be a good starting point on their Big Data journey. Offloading ETL workloads from the enterprise data warehouse (EDW) into Hadoop is a well-defined use case that produces tangible results for driving more insights while lowering costs. You gain significant business agility, avoid costly EDW upgrades, and free up EDW capacity for faster queries. This quick win builds credibility and generates savings to reinvest in more Big Data projects.
A proven reference architecture that includes everything you need in a turnkey solution – the Hadoop distribution, data integration software, servers, networking and services – makes it even easier to get started.
Slides: Moving from a Relational Model to NoSQLDATAVERSITY
Businesses are quickly moving to NoSQL databases to power their modern applications. However, a technology migration involves risk, especially if you have to change your data model. What if you could host a relatively unmodified RDBMS schema on your NoSQL database, then optimize it over time?
We’ll show you how Couchbase makes it easy to:
• Use SQL for JSON to query your data and create joins
• Optimize indexes and perform HashMap queries
• Build applications and analysis with NoSQL
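The “host a relatively unmodified RDBMS schema, then optimize over time” idea can be sketched in plain Python, without a Couchbase client. The customers/orders tables and all values below are invented for the illustration:

```python
# Sketch: a relational row can be hosted almost unchanged as a JSON
# document, then denormalized later. No real database is involved.
import json

# A row from a hypothetical relational "customers" table.
row = {"customer_id": 42, "name": "Acme Corp", "city": "Oslo"}

# Step 1: host the row as-is -- the JSON document mirrors the schema.
doc_v1 = json.dumps(row)

# Step 2: optimize over time, e.g. embed related "orders" rows that a
# relational model would keep in a separate table joined by key.
orders = [
    {"order_id": 1, "customer_id": 42, "total": 99.5},
    {"order_id": 2, "customer_id": 42, "total": 12.0},
]
doc_v2 = dict(row)
doc_v2["orders"] = [
    {k: v for k, v in o.items() if k != "customer_id"} for o in orders
]

print(json.loads(doc_v1)["name"])   # the unchanged relational data
print(len(doc_v2["orders"]))        # denormalized, join-free access
```

The point of the two-step shape is that the migration risk is front-loaded only where you choose to optimize; everything else keeps its relational layout.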
Established in 2008 as an MLM company.
All products are manufactured by LLOYD Laboratory and Northfield Laboratory, known as top FDA-approved, ISO-certified pharmaceutical companies now venturing into nutraceuticals, food supplements, and veterinary products.
LLOYD Laboratory has been ISO 9001:2000 certified
since December 17, 1998.
The LLOYD Group of Companies operates in 7 countries worldwide, with offices in the USA, Singapore, Thailand, Indonesia, Vietnam, India, and the Philippines.
Ley 41/2015, de 5 de octubre, de modificación de la Ley de Enjuiciamiento Cri...José Manuel Arroyo Quero
Ley 41/2015, of October 5, amending the Ley de Enjuiciamiento Criminal (Criminal Procedure Act) to speed up criminal justice and strengthen procedural safeguards.
The proposed Código Procesal Penal presented by the Institutional Commission charged with drafting an articulated text for a new Ley de Enjuiciamiento Criminal, constituted by agreement of the Council of Ministers on March 2, 2012, and currently under public consultation and debate, proposes a radical change to the criminal justice system whose implementation requires broad consensus. While that debate continues, and in the hope of reaching the widest possible agreement on the new criminal procedure model, certain questions must be addressed immediately because they cannot wait for the enactment of the new text that will replace the more-than-century-old Ley de Enjuiciamiento Criminal.
This law regulates the matters that do not require implementation through an organic law (which will have parallel regulation in a norm of that rank), namely: a) the need for effective provisions to speed up criminal justice and avoid undue delays, b) provision for an autonomous confiscation procedure, c) the general introduction of second-instance review, d) the broadening of the cassation appeal, and e) the reform of the extraordinary appeal for review.
Tessera - cut pile
Softer and more luxurious than loop-pile carpet, cut-pile carpet tiles are ideal for boardrooms, meeting rooms, and office environments.
Tessera offers two striking ranges of cut-pile tiles to suit different aesthetic requirements.
Tessera - loop pile
Loop-pile carpet tiles are specifically designed to keep looking good under the most demanding heavy-traffic conditions, such as reception areas and corridors.
The Tessera collection offers a variety of loop-pile constructions.
Tessera - cut and loop pile
Cut-and-loop tiles feature an ingenious hybrid construction that combines the durability of a traditional loop-pile tile with the premium look and feel of cut-pile carpet tiles.
Tessera offers cut-and-loop tiles in a range of aesthetic styles suited to all kinds of office environments.
Tessera - random lay
Random-lay tiles are designed to be installed in any direction to create a fully individualized floor. Random-lay tiles from different production batches can be used in a single installation without the usual risk of visible shade variation. A further advantage is that there is no surplus stock to carry away, and installation waste is kept to just 2%.
The Tessera random-lay tiles were made possible by advanced CMC tufting technology.
http://bit.ly/1QrYu10
"Moodle Mobile: situación actual y nuevos desarrollos" - Jordi VilaNivel 7
The numbers don’t lie. Recent statistics show that sales of mobile devices (smartphones and tablets) have overtaken sales of computers. The consumption of content and use of the internet from mobile devices has also surpassed that of computers (desktop and laptop).
In response to this growing trend, Moodle has committed to developing Moodle Mobile in order to offer functionality that is not possible through direct browser access. We will explain the current state of Moodle Mobile, its capabilities, and the future plans for the official Moodle app.
Friday, January 17: open session presenting the conclusions of the Nearly Zero-Energy Buildings (EECN) workshops, together with the official presentation of the "II Congreso de EECN".
Family businesses think in generations and create lasting value. Reliability, a down-to-earth outlook, short decision paths, a sense of responsibility, and long-term thinking have traditionally shaped their corporate strategy. When such a company fails, it is usually not because of the market. The risks lie exactly where the opportunities do: in the family. Because it is always about love, power, and money.
"Norte" is a Spanish short film by Pablo Moreno, Contracorriente Producciones. More information: www.facebook.com/Nortecortometraje Trailer: http://youtu.be/hRvPsQjbOcA
Database Development: The Object-oriented and Test-driven WayTechWell
As developers, we've created heuristics that help us build robust systems and employed test-driven development (TDD) to improve code design and counter instability. Yet object-oriented development principles and TDD have failed to gain traction in the database world. That’s because database development involves an additional driving force: the data. Max Guernsey shows how to treat databases as objects with classes of their own, rather than as containers of objects, and how to drive database designs from tests. He illustrates a way to give these database classes the ability to upgrade old data without introducing undue risk. Max also shares how to apply good object-oriented design principles to database classes and how to enforce semantic connections between databases and clients. Max demonstrates how it all works together, ensuring that your production databases work exactly like your test databases, minimizing the risk of design changes, and enabling client applications to more easily keep up with database changes.
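One way to picture the “databases as instances of a class” idea is a class that owns an ordered list of migrations, so any old instance can upgrade itself to the current design. This is only a minimal sketch of the concept, not Guernsey's actual framework; all names are illustrative:

```python
# Sketch: a "schema class" whose instances (databases) carry a version
# and can upgrade themselves by replaying only the migrations they lack.

class SchemaClass:
    # Ordered migrations; each one advances an instance by one version.
    migrations = []

    def __init__(self):
        self.version = 0
        self.tables = {}

    def upgrade(self):
        # Apply only unseen migrations, so production and test databases
        # converge on the exact same structure.
        for migrate in self.migrations[self.version:]:
            migrate(self)
            self.version += 1


def add_customers(db):
    db.tables["customers"] = ["id", "name"]

def add_email_column(db):
    db.tables["customers"].append("email")

SchemaClass.migrations = [add_customers, add_email_column]

# A test drives the design: an old instance upgrades to the current shape.
old_db = SchemaClass()
old_db.upgrade()
assert old_db.version == 2
assert old_db.tables["customers"] == ["id", "name", "email"]
```

Because `upgrade` replays only the migrations an instance has not yet seen, running it against an already-current database is a no-op, which is what makes upgrading old data low-risk.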
Big Data projects overview at EMC Labs China
• Introduction to Cloud Databases
• Data analytics in the cloud
– Parallel DBMS
– MapReduce
• FlexDB - A cloud-scale database engine based on Hadoop
The Perfect Storm: The Impact of Analytics, Big Data and the CloudInside Analysis
The Briefing Room with Barry Devlin and NuoDB
Live Webcast on Oct. 23, 2012
Three major factors in enterprise computing are combining to rewrite how data is stored, accessed and managed: 1) the demand of analytics that now spreads across hundreds, even thousands of users; 2) the pervasiveness of Big Data in all its forms and sizes; and 3) the rise of the commodity data center, aka Cloud computing. The convergence of these forces calls for a new data foundation, one that can handle the scalability and workload issues that face today's information managers.
Check out this episode of The Briefing Room to learn from veteran Analyst Barry Devlin, one of the very first architects of data warehousing, who will explain how today's information architectures require a radically different approach. He'll be briefed by Barry Morris, Founder and CEO of NuoDB, who will tout his company's product, described as a peer-to-peer messaging system that acts as a database. It behaves just like a traditional relational database, but was designed with a completely distributed and scalable architecture.
http://www.insideanalysis.com
Agile Data Rationalization for Operational IntelligenceInside Analysis
The Briefing Room with Eric Kavanagh and Phasic Systems
Live Webcast Mar. 26, 2013
The complexity of today's information architectures creates a wide range of challenges for executives trying to get a strategic view of their current operations. The data and context locked in operational systems often get diluted during the normalization processes of data warehousing and other types of analytic solutions. And the ultimate goal of seeing the big picture gets derailed by a basic inability to reconcile disparate organizational views of key information assets and rules.
Register for this episode of The Briefing Room to learn from Bloor Group CEO Eric Kavanagh, who will explain how a tightly controlled methodology can be combined with modern NoSQL technology to resolve both process and system complexities, thus enabling a much richer, more interconnected information landscape. Kavanagh will be briefed by Geoffrey Malafsky of Phasic Systems who will share his company's tested methodology for capturing and managing the business and process logic that run today's data-driven organizations. He'll demonstrate how a “don't say no” approach to entity definitions can dissolve previously intractable disagreements, opening the door to clear, verifiable operational intelligence.
Visit: http://www.insideanalysis.com
Big Challenges in Data Modeling: NoSQL and Data ModelingDATAVERSITY
Big Data and NoSQL have led to big changes in the data environment, but are they all in the best interest of data? Are they technologies that "free us from the harsh limitations of relational databases"?
In this month's webinar, we will be answering questions like these, plus:
Have we managed to free organizations from having to do Data Modeling?
Is there a need for a Data Modeler on NoSQL projects?
If we build Data Models, which types will work?
If we build Data Models, how will they be used?
If we build Data Models, when will they be used?
Who will use Data Models?
Where does Data Quality happen?
Finally, we will wrap with 10 tips for data modelers in organizations incorporating NoSQL in their modern Data Architectures.
"A Study of I/O and Virtualization Performance with a Search Engine based on ...Lucidworks (Archived)
Documentum xPlore provides an integrated search facility for the Documentum Content Server. The standalone search engine is based on EMC's xDB (a native XML database) and Lucene. In this talk we will introduce xPlore and some of its key components and capabilities. These include aspects of the tight integration of Lucene with the XML database: XQuery translation and optimization into Lucene queries/APIs, as well as transactional updates in Lucene. In addition, xPlore is being deployed aggressively into virtualized environments (both disk I/O and VM). We cover some performance results and tuning tips in these areas.
KMA and Metalogix share details on how to begin planning your SharePoint migration: tips and tricks, gotchas, budgeting and planning techniques, migration tools, and more.
The NoSQL movement has introduced four new database architectural patterns that complement, but do not replace, traditional relational and analytical databases. This presentation will introduce these four patterns and discuss their relative strengths and weaknesses for solving a variety of business problems. These problems include Big Data (scalability), search, high availability, and agility. For each type of problem we look at how NoSQL databases take different approaches to solving it, and how you can use this knowledge to find the right database architecture for your business challenges.
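The abstract does not name the four patterns, but they are commonly taken to be key-value, document, column-family, and graph. A toy sketch, assuming those four, shows how each models and queries the same fact differently:

```python
# The same fact -- Alice works in Sales -- under four NoSQL data shapes.
# All structures are illustrative stand-ins for real stores.

key_value = {"user:alice:dept": "Sales"}            # opaque value by key

document = {"users": [{"name": "Alice", "dept": "Sales"}]}  # nested docs

column_family = {                                    # rows of sparse columns
    "users": {"alice": {"dept": "Sales"}}
}

graph = {                                            # nodes plus typed edges
    "nodes": ["alice", "sales"],                     # node ids are lowercase
    "edges": [("alice", "WORKS_IN", "sales")],
}

# Each pattern answers "where does Alice work?" with a different access path:
answers = {
    "key-value": key_value["user:alice:dept"],
    "document": document["users"][0]["dept"],
    "column-family": column_family["users"]["alice"]["dept"],
    "graph": next(e[2] for e in graph["edges"]
                  if e[0] == "alice" and e[1] == "WORKS_IN"),
}
```

The access paths are the point: key lookup, document navigation, row-plus-column addressing, and edge traversal are the respective strengths that the talk weighs against each class of business problem.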
Architecture, Products, and Total Cost of Ownership of the Leading Machine Le...DATAVERSITY
Organizations today need a broad set of enterprise data cloud services with key data functionality to modernize applications and utilize machine learning. They need a comprehensive platform designed to address multi-faceted needs by offering multi-function data management and analytics to solve the enterprise’s most pressing data and analytic challenges in a streamlined fashion.
In this research-based session, I’ll discuss what the components are in multiple modern enterprise analytics stacks (i.e., dedicated compute, storage, data integration, streaming, etc.) and focus on total cost of ownership.
The complete machine learning infrastructure cost for a first modern use case at a midsize-to-large enterprise will be anywhere from $3 million to $22 million. Take this data point with you as you plan the next steps on your journey into what will be, for most companies, the highest-spend and highest-return item of the next several years.
Data at the Speed of Business with Data Mastering and GovernanceDATAVERSITY
Do you ever wonder how data-driven organizations fuel analytics, improve customer experience, and accelerate business productivity? They are successful by governing and mastering data effectively so they can get trusted data to those who need it faster. Efficient data discovery, mastering and democratization is critical for swiftly linking accurate data with business consumers. When business teams can quickly and easily locate, interpret, trust, and apply data assets to support sound business judgment, it takes less time to see value.
Join data mastering and data governance experts from Informatica—plus a real-world organization empowering trusted data for analytics—for a lively panel discussion. You’ll hear more about how a single cloud-native approach can help global businesses in any economy create more value—faster, more reliably, and with more confidence—by making data management and governance easier to implement.
What is data literacy? Which organizations, and which workers in those organizations, need to be data-literate? There are seemingly hundreds of definitions of data literacy, along with almost as many opinions about how to achieve it.
In a broader perspective, companies must consider whether data literacy is an isolated goal or one component of a broader learning strategy to address skill deficits. How does data literacy compare to other types of skills or “literacy” such as business acumen?
This session will position data literacy in the context of other worker skills as a framework for understanding how and where it fits and how to advocate for its importance.
Building a Data Strategy – Practical Steps for Aligning with Business GoalsDATAVERSITY
Developing a Data Strategy for your organization can seem like a daunting task – but it’s worth the effort. Getting your Data Strategy right can provide significant value, as data drives many of the key initiatives in today’s marketplace – from digital transformation, to marketing, to customer centricity, to population health, and more. This webinar will help demystify Data Strategy and its relationship to Data Architecture and will provide concrete, practical ways to get started.
Uncover how your business can save money and find new revenue streams.
Driving profitability is a top priority for companies globally, especially in uncertain economic times. It's imperative that companies reimagine growth strategies and improve process efficiencies to help cut costs and drive revenue – but how?
By leveraging data-driven strategies layered with artificial intelligence, companies can achieve untapped potential and help their businesses save money and drive profitability.
In this webinar, you'll learn:
- How your company can leverage data and AI to reduce spending and costs
- Ways you can monetize data and AI and uncover new growth strategies
- How different companies have implemented these strategies to achieve cost optimization benefits
Data Catalogs Are the Answer – What is the Question?DATAVERSITY
Organizations with governed metadata made available through their data catalog can answer questions their people have about the organization’s data. These organizations get more value from their data, protect their data better, gain improved ROI from data-centric projects and programs, and have more confidence in their most strategic data.
Join Bob Seiner for this lively webinar where he will talk about the value of a data catalog and how to build the use of the catalog into your stewards’ daily routines. Bob will share how the tool must be positioned for success and viewed as a must-have resource that is a steppingstone and catalyst to governed data across the organization.
In this webinar, Bob will focus on:
-Selecting the appropriate metadata to govern
-The business and technical value of a data catalog
-Building the catalog into people’s routines
-Positioning the data catalog for success
-Questions the data catalog can answer
Because every organization produces and propagates data as part of their day-to-day operations, data trends are becoming more and more important in the mainstream business world’s consciousness. For many organizations in various industries, though, comprehension of this development begins and ends with buzzwords: “Big Data,” “NoSQL,” “Data Scientist,” and so on. Few realize that all solutions to their business problems, regardless of platform or relevant technology, rely to a critical extent on the data model supporting them. As such, data modeling is not an optional task for an organization’s data effort, but rather a vital activity that facilitates the solutions driving your business. Since quality engineering/architecture work products do not happen accidentally, the more your organization depends on automation, the more important the data models driving the engineering and architecture activities of your organization. This webinar illustrates data modeling as a key activity upon which so much technology and business investment depends.
Specific learning objectives include:
- Understanding what types of challenges require data modeling to be part of the solution
- How automation requires standardization, which is achievable via data modeling techniques
- Why only a working partnership between data and the business can produce useful outcomes
Analytics play a critical role in supporting strategic business initiatives. Despite the obvious value to analytic professionals of providing the analytics for these initiatives, many executives question the economic return of analytics as well as data lakes, machine learning, master data management, and the like.
Technology professionals need to calculate and present business value in terms business executives can understand. Unfortunately, most IT professionals lack the knowledge required to develop comprehensive cost-benefit analyses and return on investment (ROI) measurements.
This session provides a framework to help technology professionals research, measure, and present the economic value of a proposed or existing analytics initiative, no matter what form the business benefit takes. The session will provide practical advice on how to calculate ROI, the formulas involved, and how to collect the necessary information.
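As a minimal illustration of the kind of calculation the session covers, a basic ROI formula can be sketched as follows. The cash flows are invented for the example, not figures from the webinar:

```python
# Classic ROI: net benefit relative to cost.
def roi(total_benefits, total_costs):
    return (total_benefits - total_costs) / total_costs

# Hypothetical 3-year analytics initiative.
costs = 400_000 + 3 * 120_000      # one-time build cost plus annual run cost
benefits = 3 * 350_000             # measured annual business benefit

print(f"ROI: {roi(benefits, costs):.0%}")
```

A fuller cost-benefit analysis would also discount the cash flows (net present value) and state how each benefit was measured, which is where most IT business cases fall short.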
How a Semantic Layer Makes Data Mesh Work at ScaleDATAVERSITY
Data Mesh is a trending approach to building a decentralized data architecture by leveraging a domain-oriented, self-service design. However, the pure definition of Data Mesh lacks a center of excellence or central data team and doesn’t address the need for a common approach for sharing data products across teams. The semantic layer is emerging as a key component to supporting a Hub and Spoke style of organizing data teams by introducing data model sharing, collaboration, and distributed ownership controls.
This session will explain how data teams can define common models and definitions with a semantic layer to decentralize analytics product creation using a Hub and Spoke architecture.
Attend this session to learn about:
- The role of a Data Mesh in the modern cloud architecture.
- How a semantic layer can serve as the binding agent to support decentralization.
- How to drive self service with consistency and control.
Enterprise data literacy. A worthy objective? Certainly! A realistic goal? That remains to be seen. As companies consider investing in data literacy education, questions arise about its value and purpose. While the destination – having a data-fluent workforce – is attractive, we wonder how (and if) we can get there.
Kicking off this webinar series, we begin with a panel discussion to explore the landscape of literacy, including expert positions and results from focus groups:
- why it matters,
- what it means,
- what gets in the way,
- who needs it (and how much they need),
- what companies believe it will accomplish.
In this engaging discussion about literacy, we will set the stage for future webinars to answer specific questions and feature successful literacy efforts.
The Data Trifecta – Privacy, Security & Governance Race from Reactivity to Re...DATAVERSITY
Change is hard, especially in response to negative stimuli, or what is perceived as negative stimuli. So organizations need to reframe how they think about data privacy, security, and governance, treating them as value centers to 1) ensure enterprise data can flow where it needs to, 2) prevent, not just react to, internal and external threats, and 3) comply with data privacy and security regulations.
Working together, these roles can accelerate faster access to approved, relevant and higher quality data – and that means more successful use cases, faster speed to insights, and better business outcomes. However, both new information and tools are required to make the shift from defense to offense, reducing data drama while increasing its value.
Join us for this panel discussion with experts in these fields as they discuss:
- Recent research about where data privacy, security and governance stand
- The most valuable enterprise data use cases
- The common obstacles to data value creation
- New approaches to data privacy, security and governance
- Their advice on how to shift from a reactive to resilient mindset/culture/organization
You’ll be educated, entertained and inspired by this panel and their expertise in using the data trifecta to innovate more often, operate more efficiently, and differentiate more strategically.
Emerging Trends in Data Architecture – What’s the Next Big Thing?DATAVERSITY
With technological innovation and change occurring at an ever-increasing rate, it’s hard to keep track of what’s hype and what can provide practical value for your organization. Join this webinar to see the results of a recent DATAVERSITY survey on emerging trends in Data Architecture, along with practical commentary and advice from industry expert Donna Burbank.
Data Governance Trends - A Look Backwards and ForwardsDATAVERSITY
As DATAVERSITY’s RWDG series hurtles into our 12th year, this webinar takes a quick look behind us, evaluates the present, and predicts the future of Data Governance. Based on webinar numbers, hot Data Governance topics have evolved over the years from policies and best practices, roles and tools, and data catalogs and frameworks, to supporting data mesh and fabric, artificial intelligence, virtualization, literacy, and metadata governance.
Join Bob Seiner as he reflects on the past and what has and has not worked, while sharing examples of enterprise successes and struggles. In this webinar, Bob will challenge the audience to stay a step ahead by learning from the past and blazing a new trail into the future of Data Governance.
In this webinar, Bob will focus on:
- Data Governance’s past, present, and future
- How trials and tribulations evolve to success
- Leveraging lessons learned to improve productivity
- The great Data Governance tool explosion
- The future of Data Governance
Data Governance Trends and Best Practices To Implement TodayDATAVERSITY
Would you share your bank account information on social media? How about shouting your social security number on the New York City subway? We didn’t think so either – that’s why data governance is consistently top of mind.
In this webinar, we’ll discuss the common Cloud data governance best practices – and how to apply them today. Join us to uncover Google Cloud’s investment in data governance and learn practical and doable methods around key management and confidential computing. Hear real customer experiences and leave with insights that you can share with your team. Let’s get solving.
Topics that you will hear addressed in this webinar:
- Understanding the basics of Cloud Incident Response (IR) and anticipated data governance trends
- Best practices for key management and applying data governance to your day-to-day work
- The next wave of Confidential Computing and how to get started, including a demo
It is a fascinating, explosive time for enterprise analytics.
It is from the position of analytics leadership that the enterprise mission will be executed and company leadership will emerge. The data professional is absolutely sitting on the performance of the company in this information economy and has an obligation to demonstrate the possibilities and originate the architecture, data, and projects that will deliver analytics. After all, no matter what business you’re in, you’re in the business of analytics.
The coming years will be full of big changes in enterprise analytics and data architecture. William will kick off the fifth year of the Advanced Analytics series with a discussion of the trends winning organizations should build into their plans, expectations, vision, and awareness now.
Too often I hear the question “Can you help me with our data strategy?” Unfortunately, for most, this is the wrong request because it focuses on the least valuable component: the data strategy itself. A more useful request is: “Can you help me apply data strategically?” Yes, at early maturity phases the process of developing strategic thinking about data is more important than the actual product! Trying to write a good (much less perfect) data strategy on the first attempt is generally not productive – particularly given the widespread acceptance of Mike Tyson’s truism: “Everybody has a plan until they get punched in the face.” This program refocuses efforts on learning how to iteratively improve the way data is strategically applied. This will permit data-based strategy components to keep up with agile, evolving organizational strategies. It also contributes to three primary organizational data goals. Learn how to improve the following:
- Your organization’s data
- The way your people use data
- The way your people use data to achieve your organizational strategy
This will help in ways never imagined. Data are your sole non-depletable, non-degradable, durable strategic assets, and they are pervasively shared across every organizational area. Addressing existing challenges programmatically includes overcoming necessary but insufficient prerequisites and developing a disciplined, repeatable means of improving business objectives. This process (based on the theory of constraints) is where the strategic data work really occurs as organizations identify prioritized areas where better assets, literacy, and support (data strategy components) can help an organization better achieve specific strategic objectives. Then the process becomes lather, rinse, and repeat. Several complementary concepts are also covered, including:
- A cohesive argument for why data strategy is necessary for effective data governance
- An overview of prerequisites for effective strategic use of data strategy, as well as common pitfalls
- A repeatable process for identifying and removing data constraints
- The importance of balancing business operation and innovation
Who Should Own Data Governance – IT or Business?DATAVERSITY
The question is asked all the time: “What part of the organization should own your Data Governance program?” The typical answers are “the business” and “IT (information technology).” Another answer to that question is “Yes.” The program must be owned and reside somewhere in the organization. You may ask yourself if there is a correct answer to the question.
Join this new RWDG webinar with Bob Seiner where Bob will answer the question that is the title of this webinar. Determining ownership of Data Governance is a vital first step. Figuring out the appropriate part of the organization to manage the program is an important second step. This webinar will help you address these questions and more.
In this session Bob will share:
- What is meant by “the business” when it comes to owning Data Governance
- Why some people say that Data Governance in IT is destined to fail
- Examples of IT-positioned Data Governance success
- Considerations for answering the question in your organization
- The final answer to the question of who should own Data Governance
It is clear that Data Management best practices exist, and so does a useful process for improving existing Data Management practices. The question arises: Since we understand the goal, how does one design a process for Data Management goal achievement? This program describes what must be done at the programmatic level to achieve better data use and a way to implement this as part of your data program. The approach combines DMBoK content and CMMI/DMM processes – giving organizations the opportunity to benefit from the best of both. It also permits organizations to understand:
- Their current Data Management practices
- Strengths that should be leveraged
- Remediation opportunities
MLOps – Applying DevOps to Competitive AdvantageDATAVERSITY
MLOps is a practice for collaboration between Data Science and operations to manage the production machine learning (ML) lifecycle. As an amalgamation of "machine learning" and "operations," MLOps applies DevOps principles to ML delivery, enabling the delivery of ML-based innovation at scale to result in:
- Faster time to market of ML-based solutions
- More rapid rate of experimentation, driving innovation
- Assurance of quality, trustworthiness, and ethical AI
MLOps is essential for scaling ML. Without it, enterprises risk struggling with costly overhead and stalled progress. Several vendors have emerged with offerings to support MLOps: the major offerings are Microsoft Azure ML and Google Vertex AI. We looked at these offerings from the perspective of enterprise features and time-to-value.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps is. We closed with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
1. Agile NoSQL with XRX
Dan McCreary
President
Dan McCreary & Associates
dan@danmccreary.com
(952) 931-9198
Metadata Solutions
2. Session Description
What if all members of your software development team, from Project
Managers, Business Analysts, Testing and Documentation members, could
create and modify web applications and web services? With traditional
SQL solutions this was difficult because of the need to convert web
pages to objects and objects to tables, as well as the reverse
functions. But now, with native XML databases and drag-and-drop forms
builders, data can flow from the XML model of a web form to the
database and back again without translation. This radically simpler
process, combined with standardized query languages, makes it easier
for non-programmers to build and maintain their own applications and
web services.
Copyright 2011 Dan McCreary & Associates
3. During this session viewers will:
• Understand the challenges associated with traditional
four-translation web/object/RDBMS systems
• Understand the elegant simplicity of zero-translation systems
• Understand the role of web standards in the XRX environment
• See how each component of XRX plays key roles in the application
• See how organizations have leveraged XRX for agile web development
• See variations of XRX architectures
• Create an objective process to evaluate the benefits of XRX
• Get resources for building XRX pilot projects
4. Executive Summary
• Schema-free, zero-translation NoSQL systems have the ability to
have a big impact on overall system agility
• XRX systems dramatically increase overall agility and can also
empower non-programmers to build and maintain NoSQL systems
5. Background for Dan McCreary
• Circuit Designer at Bell Labs
• UNIX/Supercomputers
• NeXT Computer (Steve Jobs)
• Owner of 75-person software consulting firm with a focus on
Object-Oriented systems and Object-relational mapping frameworks
• US Federal data integration (National Information Exchange Model,
NIEM.gov)
• Native XML/XQuery for metadata management since 2006
• Advocate of web standards, Open Source, NoSQL, and XRX systems
• W3C invited expert (forms working group)
• Likes functional programming
6. Origins: The Humble Data Dictionary
7. Electronic Certificate of Real Estate
Summer 2006
1 Document = 44 SQL inserts
8. 250 Data Elements
XForms Mockup
9. Four Translations
(Diagram: Web Browser ↔ Object Middle Tier ↔ Relational Database,
with translations T1–T4.)
• T1 – HTML into Java Objects
• T2 – Java Objects into SQL Tables
• T3 – Tables into Objects
• T4 – Objects into HTML
10. Kurt's Suggestion
Kurt Cagle: "Use a Native XML Database!"
(Diagram: a web form in the web browser saves directly to eXist via
store($collection, $file-name, $data).)
11. Zero Translation
(Diagram: Web Browser (XForms) ↔ XML database.)
• XML lives in the web browser (XForms)
• REST interfaces
• XML in the database (Native XML, XQuery)
• XRX Web Application Architecture
• No translation!
12. Key Question: Impact on Agility
• What impact do zero-translation NoSQL systems have on system
agility?
• Agility: the ability to quickly react to changing business
requirements at any stage of the software development lifecycle
• Question: Big impact or little impact?
• Answer: Big impact
13. No-Shredding!
• Relational databases take a single hierarchical document ("My Form
Data") and shred it into many pieces so it will fit in tabular
structures
• Many document-oriented NoSQL databases prevent this shredding
14. Is Shredding Really Necessary?
• Every time you take hierarchical data and put it into a traditional
database you have to put repeating groups in separate tables and use
SQL "joins" to reassemble the data
15. Many Processes Today Are Driven By…
The constraints of yesterday…
Challenge: Ask ourselves the question…
Do our current methods of solving problems with tabular data…
Reflect the storage of the 1950s…
Or our actual business requirements?
What structures best solve the actual business problem?
16. "Schema Free"
• Systems that automatically determine how to index data as the data
is loaded into the database
• No a priori knowledge of data structure
• No need for up-front logical data modeling
– …but some modeling is still critical
• Adding new data elements or changing data elements is not
disruptive
• Searching millions of records still has sub-second response time
19. Finding the Right Match
Schema-Free
Standards Compliant
Mature Query Language
Use CMU's Architecture Tradeoff Analysis Method (ATAM)
20. Architectural Summary
Four Translation: web browser → object middle tier → RDBMS database
(HTML web pages, object middle tier, RDBMS database)
Zero Translation: web browser → XML database
(XForms client, Native XML Database)
Which system is more agile, and by how much?
How can this help us manage enterprise metadata?
21. Origins: The XML Data Dictionary
22. Electronic Certificate of Real Estate
Summer 2006
1 Document = 44 SQL inserts
23. 250 Data Elements
XForms Mockup
24. The NO-SQL Universe
• Key-Value Stores
• Document Stores (XML)
• Graph Stores
• Object Stores
25. Three Core Processes
1. Add new data to the database (XML examples are used, but JSON and
other formats could also be used)
2. Query the data
3. Create an XML web service
• Analyze the effort for each step
• Compare SQL and NoSQL-XRX systems
• Analyze impact on participation of non-programmers
• Rate the relative agility of each system
26. It is Easy to Import Data
SQL:
1. Analyze data for all parent-child relationships and repeating
groups
2. Design logical and physical ER diagrams
3. For each table, create a data definition file using a data
definition language (DDL)
4. Create indexes using DDL
5. Create one table for each repeating set of data
6. Run DDL on database, creating tables using the appropriate data
types
7. Create indexes
8. Create insert statements
9. Create separate insert statements for each repeating group
10. Run insert statements on primary structures in database
11. Use primary keys of the first data inserts as foreign keys of
dependent data structures
XQuery:
1. Drag XML files into folder
27. The XML File system
• XML file system – a way of storing information in XML that can be
quickly searched
• You can drag-and-drop almost any files onto this file system
• You access it by using a WebDAV connector
• Microsoft Windows "My Network Places" function
28. Functional Programming
y = f(x)
• Computer programs are like mathematical functions
• Developers do not manipulate states and variables (things that
change value), but focus entirely on constants and functions (things
that never change) and transformations of data
• Makes it very easy to build modular programs
• Software written in functional programming languages tends to be
very concise and easy to port to highly parallel systems
http://en.wikibooks.org/wiki/Computer_programming/Functional_programming
29. It's Easy to Query XML Data
SQL:
  SELECT col1, col2
  FROM table
  WHERE col1 = 1
XQuery:
  for $r in doc('t.xml')//row
  where $r/col1 = 1
  return ($r/col1, $r/col2)
Sample data (t.xml):
  <root>
    <row><col1>1</col1><col2>A</col2></row>
    <row><col1>1</col1><col2>B</col2></row>
    <row><col1>1</col1><col2>C</col2></row>
    <row><col1>1</col1><col2>D</col2></row>
  </root>
30. It is Easy to Create A Web Service
The Java/JDBC/SQL Way:
1. Learn Java or find a Java developer
2. Install Tomcat web server
3. Install Java AXIS web server
4. Write a JDBC program that sends SQL queries to a database
5. Get the results back in Java Result Object structures
6. Go through the Java result structures and use print statements to
wrap XML tags around the strings in the result objects
7. Rename your class files to .jws files
8. Add the .jws files to the Tomcat deploy folders
9. The WSDL files will automatically be generated
The XRX Way:
1. All XQuerys are web services
31. Insert/Select/Publish Comparison
(Chart: total effort for insert, query, and web service. The SQL
stack needs logical data modeling plus SQL for inserts, SQL for
queries, and Java/JDBC/Tomcat/AXIS for web services; the NoSQL stack
needs only XQuery for all three.)
32. Steps for Adding a New Element
• Add the element to the "new instance" a form uses and set the
default value there (takes about one minute)
• Add the UI control
• Optionally – run an XQuery update to update existing data, or add
an insert to the "enricher" when you get an instance (takes about 10
minutes)
33. Six Translation – Web Service
(Diagram: Web Browser ↔ Object Middle Tier ↔ Relational Database,
plus a Web Service layer adding translations T5 and T6.)
• T1 – HTML into Java Objects
• T2 – Java Objects into SQL Tables
• T3 – Tables into Objects
• T4 – Objects into HTML
• T5 – Objects to XML
• T6 – XML to Objects
34. XML Stored in XForms Model
(Diagram: the XForms model in the browser saves to and updates from
the database; the view binds to the model.)
35. XRX Core Process
(Diagram: the XRX core loop – the model in the browser is saved and
edited against the database, updates flow back, and the view tracks
the model.)
36. Code Table Services
(Diagram: the client holds the model (form data) and view; the
server's Code Table Service (all-codes.xq) supplies the code tables.)
Code tables are separated from form instance data
37. XRX Dynamic Forms Generation
(Diagram: a client XForms application – session, form data with user,
team, status, role, and group, code tables, views, binding rules
(required, read-only, data types, calculations, constraints),
submissions, and an XForms view with static and dynamic controls – is
generated from application-server services: a document collection, a
DataElement Registry, code table services, context filters, suggest
services, a business rules editor, inference, an XML Schema Registry,
a subschema service, and constraint/semantic schemas. The registries
belong to design time; the generated form runs at run time.)
38. Model Driven
• XForms enables the developer to reuse business rules encapsulated
in XML Schemas (xsd) and XML transforms (XQuery typeswitch)
• XForms reduces duplication and ensures that a change in the
underlying business logic does not require rewriting in another
language
(Diagram: XForms Application backed by a Metadata/XML Schema
Registry.)
39. View and Model are Trees
(Diagram: Model → Control (Bind) → View (Presentation).)
• The view is a tree of presentation data elements
• Models are comprised of one or more trees
• XForms supplies the control layer (bind) that moves data elements
to and from the model
• Users don't have to worry about moving things to and from the
screen
40. Models and View Are Linked with "Bind"
(Diagram: the xf:model tree – Person/Name with first and last – in
the HTML head is linked by <bind> elements to labeled input controls
in the body's form/fieldset.)
• Both the model and the views are trees of data elements
41. Just “Do The Right Thing”
(Diagram: model elements such as PersonCurrentOnTaxes
type="xs:boolean" and PersonBirthDate type="xs:date" are bound to
input controls in the form.)
• Data types from the model just do the right thing
• Boolean variables become checkboxes
• Dates have date selectors
42. Example of Automatic UI Generation
• All true/false data types (xs:boolean) automatically become a
checkbox
• All dates (xs:date) have a date selector to the right of the date
field
• All codes can be selected from lists
43. Structure of a XForms File
MyForm.xhtml layout: Namespaces, CSS Imports (View), Model,
Constraints (Bindings), UI (View), Submit Controls
• XForms tags are just XML tags embedded in a standard XHTML file
with a different namespace
• Most HTML form tags are exactly the same, but some attributes have
been promoted to be full elements
44. REST
• REpresentational State Transfer
• Create applications based on well-designed URLs
• Take advantage of web caching
• Migrate toward Resource-Oriented Computing (ROC)
• REST evangelists: RESTifarians
45. Five RESTFull Friends
1. The in-memory cache in your browser
2. Your local hard drive cache
3. Your local enterprise cache
4. The cache on the web server farm
5. The cache on the database
Please make sure to check with your RESTful friends BEFORE you
bother the database.
46. Shallow REST vs. Deep REST
• You can start taking advantage of REST by just doing well
thought-out URL design
• To take advantage of deep REST you must consider the subtleties of
the HTTP protocol
– GET vs POST vs PUT
– DELETE
47. Benefits of REST
• Provides improved response time
• Reduces server load
• Improves server scalability
• Requires less client-side software
• Depends less on vendor dependencies
• Promotes discovery
• Provides better long-term compatibility and evolvability
48. Sample XForms
49. Requirements Editor
(Screenshot: the requirements editor with code-table selection lists
and repeating elements.)
50. Drag-and-Drop XForms Builders
• Several options for drag-and-drop forms builders
• Open Source
– Orbeon
– BetterFORM
• Commercial
– IBM Forms
55. XRX Apps Demo Site
• Demo site for danmccreary.com
• Some programs written in under an hour with student data
• Several utility programs that start with a template and add
transformations to other formats
• Focus on metadata management
56. Structured Retrieval is Better
Introduction to Information Retrieval
by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze
Cambridge University Press, 2008
http://nlp.stanford.edu/IR-book/information-retrieval-book.html
57. Table 10.1 - Revised
                 RDB search        unstructured       structured
                                   retrieval          retrieval
objects          records           unstructured       trees with text
                                   documents          at leaves
model            relational model  vector space       XML hierarchy
                                   model & others
main data        table             inverted index     trees with
structure                                             node-ids for
                                                      document ids
queries          SQL               free text queries  XQuery fulltext
Table 10.1: RDB (relational database) search, unstructured
information retrieval, and structured information retrieval.
58. Retain Document Structure in Search
"Bag of Words" "Retained Structure"
keywords
doc-id
keywords
'love' keywords
keywords
'hate' keywords
'new'
new
keywords
'fear'
• All keywords in a single container • Keywords associated with each
• Only count frequencies are stored sub-document component
with each word • Assign higher weight for titles and
M
names
D • Set by non-programmer 58
59. Empower the Non Programmer!
Before XRX: "Sorry, we have no idea what code 47 means."
After XRX: SUPER PM, BA! "Let me just search our XRX system… I'll
have your answer in 150 milliseconds."
60. Using the Right Architecture
Start → Finish
Find ways to remove barriers to empowering the non-programmers on
your team.
61. Six "S"s of XRX Agility
1. Semantics – build around shared metadata registry services
2. Search – structured search
3. Standards – CSS, XML, XPath, XQuery, XForms, XML Schemas
4. Services – all XQueries are REST services
5. Solutions – that are quickly customized
6. Super – empower the non-programmers
62. XRX Resources
• XRX Wikipedia Page
– http://en.wikipedia.org/wiki/XRX_(web_application_architecture)
• XRX Resources
– http://www.danmccreary.com/xrx/
• LinkedIn XRX Group
– http://www.linkedin.com/groups/XRX-Web-Application-Architecture-1056777
• Beginner's Guide to XRX
– http://www.danmccreary.com/xrx/beginners-guide
• XRX Wikibook
– http://en.wikibooks.org/wiki/XRX
• Google Code
– http://code.google.com/p/xrx/
63. References
XForms
XQuery
XRX
A Beginner's Guide to XRX
Send e-mail to dan@danmccreary.com for an extended list of "getting
started" resources.
64. Thank You!
Dan McCreary, President
Kelly-McCreary & Associates
dan@danmccreary.com
(952) 931-9198
eXist Meeting Prague March 12th, 2010