The NoSQL movement has rekindled interest in data storage solutions. A few years ago, in systems of limited scale, storage choices for programmers and architects were simple: relational databases were almost always the answer. However, the advent of the Cloud and ever-increasing user bases have given rise to much larger-scale systems. Relational databases cannot always scale to meet the needs of those systems, and the NoSQL movement has proposed many alternatives.
A programmer selecting a data model now has to choose from a wide variety of options: local memory, relational databases, files, distributed caches, column-family storage, document storage, name-value pairs, graph databases, service registries, queues, tuple spaces, and so on. Furthermore, there are different access layers, such as accessing data directly, using an object-relational mapping layer like Hibernate/JPA, or using data services. Moreover, users also need to worry about scaling storage along multiple dimensions: the number of databases, the number of tables, the amount of data in a table, the frequency of requests, and the types of requests (read/write ratio).
Consequently, choosing the right data model for a given problem is no longer trivial, and such a choice requires a clear understanding of the different storage offerings, their similarities and differences, and the associated tradeoffs. We faced the same problem while designing the data interfaces for the Stratos Platform as a Service (PaaS) offering, and in this talk we would like to share the findings and experiences of that work. We will present a survey of different data models, their similarities and differences, tradeoffs, and killer apps for each model. We believe participants will walk away with a broader understanding of data models and guidelines on which model to use when.
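The spread of models the abstract describes can be made concrete with a small sketch. The snippet below is illustrative only (the record and names are invented, not from the talk); it stores the same record in two of the listed models, a name-value store and a relational table, using only the Python standard library:

```python
import sqlite3

# Name-value pair model: a plain in-process dictionary keyed by a string id.
kv_store = {"user:42": {"name": "Alice", "plan": "pro"}}

# Relational model: the same record as a row in a table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, plan TEXT)")
conn.execute("INSERT INTO users VALUES (42, 'Alice', 'pro')")

# Lookup by key is a single hash probe in the name-value model...
name_by_key = kv_store["user:42"]["name"]

# ...while the relational model also supports ad-hoc queries across rows,
# which is one of the tradeoffs a survey like this weighs.
name_by_query = conn.execute(
    "SELECT name FROM users WHERE plan = ?", ("pro",)
).fetchone()[0]

print(name_by_key, name_by_query)  # Alice Alice
```

The point of the contrast: the key-value form is fast and simple for known-key access, while the relational form pays setup cost in exchange for flexible querying.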
Finding the Right Data Solution for Your Application in the Data Storage Hays... (Srinath Perera)
Runner Up: Best Use of Customer Insight (B2B Marketing)
In July 2011, the international IT services company Atos Origin acquired Siemens IT Services and rebranded as Atos. The merger catapulted Atos from the eleventh- to the third-largest IT provider to financial services organisations in Europe, a huge opportunity for Atos to target a larger global financial services client base. The resulting prospect campaign combined deep prospect insight, personalised approaches, and integrated international execution. It delivered a 350x ROI.
Key takeaways will address the benefits of building sector-specific propositions, developing deep prospect intelligence, and combining data, creative communications and telemarketing in a single joined-up approach.
ElasticSearch - Search in the Age of the Cloud (inovex GmbH)
Fast search with relevant results over large data sets is now something all of us take for granted, anytime and anywhere. Search is no longer used only in classic scenarios such as enterprise search and web search; it also organizes access to data and information in a wide range of applications (keyword: search-based applications). A large share of the search technologies in common use is based on the Apache Lucene project. Among Lucene-based search servers there is now, alongside Apache Solr, a new star on the open-source scene: ElasticSearch. This talk introduces ElasticSearch and its usage scenarios in depth and delineates its capabilities against Lucene and Solr, particularly for large data volumes.
TERMINALFOUR's Daniel Keane explores TERMINALFOUR Mailer, a product for creating newsletters and mailing campaigns that allows users to re-use content from Site Manager.
This presentation outlines the consultancy services provided by Stickyeyes.
Stickyeyes is an award-winning digital marketing agency, working with global brands in over 20 countries.
The Channel Partnership developed and executed a content-driven campaign, Banking 20|20, to strengthen Cable&Wireless Worldwide’s positioning within the UK banking sector and deliver new engagement opportunities to its sales teams. The Banking 20|20 campaign was built around the critical issues facing the banking and financial services sector, highlighting the key challenges to be overcome and how operations need to evolve to succeed in a changing landscape.
The campaign was praised throughout the organisation and exceeded expectations across all key metrics, including customer advocacy ratings, website visitors, new sales engagements and pipeline value.
Understanding Hacker Tools and Techniques: A Live Demonstration (EnergySec)
Presented by: Monta Elkins, FoxGuard Solutions
Abstract: Learn what the hackers know. See the tools used by hackers to scan your networks, guess your passwords, and break into your unpatched Windows® XP systems to take full control in this live demonstration. Use the knowledge you gain to better prepare yourself and your systems against attacks.
Click through excerpts of LinkedIn's report on recruiting trends in China. The report is in Simplified Chinese.
Learn more about LinkedIn Talent Solutions: http://linkd.in/1bgERGj
Subscribe to the LinkedIn Talent Blog: http://linkd.in/18yp4Cg
Follow the LinkedIn Talent Solutions page: http://linkd.in/1cNvIFT
Tweet with us: http://bit.ly/HireOnLinkedIn
When we allow Facebook applications access, we allow them to see things that are within our Facebook account.
This deck shows you how to learn more about your personal Facebook application settings including:
- Which applications you've allowed access to
- What data they are seeing about you
- When they last accessed data about you
- How to remove access to data or applications inside your Facebook account
Content migration part 2: TERMINALFOUR t44u 2013 (Terminalfour)
TERMINALFOUR's Paul Kelly discusses the new and improved HTML Importer tool in TERMINALFOUR Site Manager, the limitations of the old tool, and the benefits of the updated content migration tool.
NoSQL stands for “not only SQL.”
NoSQL databases are databases that store data in a format other than relational tables.
NoSQL databases, also called non-relational databases, don’t store relationship data well.
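As a hedged illustration of that definition (example data invented; standard library only): a document store keeps an aggregate, such as an order with its line items, as one nested document rather than as joined rows, so reading the aggregate needs no joins, while relationship-style queries across documents require scanning:

```python
import json

# A whole order, including its line items, stored as one nested document.
order_doc = {
    "order_id": 1001,
    "customer": "Bob",
    "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}],
}

# Round-trip through JSON, as a document store would persist it.
loaded = json.loads(json.dumps(order_doc))

# Reading one aggregate is cheap: everything arrives together, no joins.
item_count = len(loaded["items"])

# But a relationship-style query ("which orders contain SKU A1?") means
# scanning documents -- the weakness the caveat above refers to.
orders = [loaded]
with_a1 = [o["order_id"] for o in orders
           if any(i["sku"] == "A1" for i in o["items"])]

print(item_count, with_a1)  # 2 [1001]
```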
In this session you will learn:
Zookeeper
To know more, click here: https://www.mindsmapped.com/courses/big-data-hadoop/big-data-and-hadoop-training-for-beginners/
An Open Talk at DeveloperWeek Austin 2017 by Kimberly Wilkins (@dba_denizen), Principal Engineer - Databases at ObjectRocket. Featuring new use cases like Bitcoin, AI, IoT, and all the cool things.
Architecture, Products, and Total Cost of Ownership of the Leading Machine Le... (DATAVERSITY)
Organizations today need a broad set of enterprise data cloud services with key data functionality to modernize applications and utilize machine learning. They need a comprehensive platform designed to address multi-faceted needs by offering multi-function data management and analytics to solve the enterprise’s most pressing data and analytic challenges in a streamlined fashion.
In this research-based session, I’ll discuss the components of multiple modern enterprise analytics stacks (e.g., dedicated compute, storage, data integration, streaming) and focus on total cost of ownership.
The complete machine learning infrastructure cost for a first modern use case at a midsize-to-large enterprise will be anywhere from $3 million to $22 million. Take this data point with you as you plan for what will be one of the highest-spend, highest-return items for most companies over the next several years.
Data at the Speed of Business with Data Mastering and Governance (DATAVERSITY)
Do you ever wonder how data-driven organizations fuel analytics, improve customer experience, and accelerate business productivity? They succeed by governing and mastering data effectively so they can get trusted data to those who need it faster. Efficient data discovery, mastering, and democratization are critical for swiftly linking accurate data with business consumers. When business teams can quickly and easily locate, interpret, trust, and apply data assets to support sound business judgment, it takes less time to see value.
Join data mastering and data governance experts from Informatica—plus a real-world organization empowering trusted data for analytics—for a lively panel discussion. You’ll hear more about how a single cloud-native approach can help global businesses in any economy create more value—faster, more reliably, and with more confidence—by making data management and governance easier to implement.
What is data literacy? Which organizations, and which workers in those organizations, need to be data-literate? There are seemingly hundreds of definitions of data literacy, along with almost as many opinions about how to achieve it.
In a broader perspective, companies must consider whether data literacy is an isolated goal or one component of a broader learning strategy to address skill deficits. How does data literacy compare to other types of skills or “literacy” such as business acumen?
This session will position data literacy in the context of other worker skills as a framework for understanding how and where it fits and how to advocate for its importance.
Building a Data Strategy – Practical Steps for Aligning with Business Goals (DATAVERSITY)
Developing a Data Strategy for your organization can seem like a daunting task – but it’s worth the effort. Getting your Data Strategy right can provide significant value, as data drives many of the key initiatives in today’s marketplace – from digital transformation, to marketing, to customer centricity, to population health, and more. This webinar will help demystify Data Strategy and its relationship to Data Architecture and will provide concrete, practical ways to get started.
Uncover how your business can save money and find new revenue streams.
Driving profitability is a top priority for companies globally, especially in uncertain economic times. It's imperative that companies reimagine growth strategies and improve process efficiencies to help cut costs and drive revenue – but how?
By leveraging data-driven strategies layered with artificial intelligence, companies can achieve untapped potential and help their businesses save money and drive profitability.
In this webinar, you'll learn:
- How your company can leverage data and AI to reduce spending and costs
- Ways you can monetize data and AI and uncover new growth strategies
- How different companies have implemented these strategies to achieve cost optimization benefits
Data Catalogs Are the Answer – What Is the Question? (DATAVERSITY)
Organizations with governed metadata made available through their data catalog can answer questions their people have about the organization’s data. These organizations get more value from their data, protect their data better, gain improved ROI from data-centric projects and programs, and have more confidence in their most strategic data.
Join Bob Seiner for this lively webinar where he will talk about the value of a data catalog and how to build the use of the catalog into your stewards’ daily routines. Bob will share how the tool must be positioned for success and viewed as a must-have resource that is a steppingstone and catalyst to governed data across the organization.
In this webinar, Bob will focus on:
-Selecting the appropriate metadata to govern
-The business and technical value of a data catalog
-Building the catalog into people’s routines
-Positioning the data catalog for success
-Questions the data catalog can answer
Because every organization produces and propagates data as part of its day-to-day operations, data trends are becoming more and more prominent in the mainstream business world’s consciousness. For many organizations in various industries, though, comprehension of this development begins and ends with buzzwords: “Big Data,” “NoSQL,” “Data Scientist,” and so on. Few realize that all solutions to their business problems, regardless of platform or technology, rely to a critical extent on the data model supporting them. As such, data modeling is not an optional task for an organization’s data effort, but rather a vital activity that facilitates the solutions driving your business. Quality engineering and architecture work products do not happen accidentally, so the more your organization depends on automation, the more important the data models driving its engineering and architecture activities become. This webinar illustrates data modeling as a key activity upon which so much technology and business investment depends.
Specific learning objectives include:
- Understanding what types of challenges require data modeling to be part of the solution
- How automation requires standardization, achievable via data modeling techniques
- Why only a working partnership between data and the business can produce useful outcomes
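The automation point above can be sketched in code. In this illustrative example (the table and column names are invented, not from the webinar), a tiny data model is expressed as data, and a standardized artifact, SQL DDL, is derived from it mechanically, which is the sense in which automated engineering depends on the model:

```python
# A minimal data model captured as plain data.
model = {
    "table": "customer",
    "columns": [
        ("id", "INTEGER", "PRIMARY KEY"),
        ("name", "TEXT", "NOT NULL"),
        ("region", "TEXT", ""),
    ],
}

def to_ddl(m):
    """Derive a CREATE TABLE statement from the model automatically."""
    cols = ", ".join(
        " ".join(part for part in (name, ctype, constraint) if part)
        for name, ctype, constraint in m["columns"]
    )
    return f"CREATE TABLE {m['table']} ({cols})"

print(to_ddl(model))
# CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL, region TEXT)
```

Because the DDL is generated rather than hand-written, any change to the model propagates consistently, which is exactly why the model's quality matters so much.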
Analytics play a critical role in supporting strategic business initiatives. Despite the obvious value to analytic professionals of providing the analytics for these initiatives, many executives question the economic return of analytics as well as data lakes, machine learning, master data management, and the like.
Technology professionals need to calculate and present business value in terms business executives can understand. Unfortunately, most IT professionals lack the knowledge required to develop comprehensive cost-benefit analyses and return on investment (ROI) measurements.
This session provides a framework to help technology professionals research, measure, and present the economic value of a proposed or existing analytics initiative, no matter what form the business benefit takes. The session will provide practical advice on how to calculate ROI, including the relevant formulas, and on how to collect the necessary information.
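The core calculation such a framework rests on is simple. The sketch below uses the standard ROI formula with invented illustrative figures, not numbers from the session:

```python
def roi(benefit, cost):
    """Return on investment: (benefit - cost) / cost, typically shown as a %."""
    return (benefit - cost) / cost

# Illustrative numbers only (assumptions, not session data):
annual_benefit = 500_000   # e.g. analytics-driven revenue uplift
annual_cost = 200_000      # e.g. platform plus staffing

print(f"{roi(annual_benefit, annual_cost):.0%}")   # 150%
```

The hard part, as the abstract notes, is not the arithmetic but gathering defensible inputs for `benefit` and `cost`.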
How a Semantic Layer Makes Data Mesh Work at Scale (DATAVERSITY)
Data Mesh is a trending approach to building a decentralized data architecture by leveraging a domain-oriented, self-service design. However, the pure definition of Data Mesh lacks a center of excellence or central data team and doesn’t address the need for a common approach for sharing data products across teams. The semantic layer is emerging as a key component to supporting a Hub and Spoke style of organizing data teams by introducing data model sharing, collaboration, and distributed ownership controls.
This session will explain how data teams can define common models and definitions with a semantic layer to decentralize analytics product creation using a Hub and Spoke architecture.
Attend this session to learn about:
- The role of a Data Mesh in the modern cloud architecture.
- How a semantic layer can serve as the binding agent to support decentralization.
- How to drive self service with consistency and control.
Enterprise data literacy. A worthy objective? Certainly! A realistic goal? That remains to be seen. As companies consider investing in data literacy education, questions arise about its value and purpose. While the destination – having a data-fluent workforce – is attractive, we wonder how (and if) we can get there.
Kicking off this webinar series, we begin with a panel discussion to explore the landscape of literacy, including expert positions and results from focus groups:
- why it matters,
- what it means,
- what gets in the way,
- who needs it (and how much they need),
- what companies believe it will accomplish.
In this engaging discussion about literacy, we will set the stage for future webinars to answer specific questions and feature successful literacy efforts.
The Data Trifecta – Privacy, Security & Governance Race from Reactivity to Re... (DATAVERSITY)
Change is hard, especially in response to negative stimuli, real or perceived. Organizations therefore need to reframe how they think about data privacy, security, and governance, treating them as value centers to 1) ensure enterprise data can flow where it needs to, 2) prevent, not just react to, internal and external threats, and 3) comply with data privacy and security regulations.
Working together, these roles can accelerate faster access to approved, relevant and higher quality data – and that means more successful use cases, faster speed to insights, and better business outcomes. However, both new information and tools are required to make the shift from defense to offense, reducing data drama while increasing its value.
Join us for this panel discussion with experts in these fields as they discuss:
- Recent research about where data privacy, security and governance stand
- The most valuable enterprise data use cases
- The common obstacles to data value creation
- New approaches to data privacy, security and governance
- Their advice on how to shift from a reactive to resilient mindset/culture/organization
You’ll be educated, entertained and inspired by this panel and their expertise in using the data trifecta to innovate more often, operate more efficiently, and differentiate more strategically.
Emerging Trends in Data Architecture – What’s the Next Big Thing? (DATAVERSITY)
With technological innovation and change occurring at an ever-increasing rate, it’s hard to keep track of what’s hype and what can provide practical value for your organization. Join this webinar to see the results of a recent DATAVERSITY survey on emerging trends in Data Architecture, along with practical commentary and advice from industry expert Donna Burbank.
Data Governance Trends - A Look Backwards and Forwards (DATAVERSITY)
As DATAVERSITY’s RWDG series hurtles into its 12th year, this webinar takes a quick look behind us, evaluates the present, and predicts the future of Data Governance. Based on webinar numbers, hot Data Governance topics have evolved over the years from policies and best practices, roles and tools, and data catalogs and frameworks, to supporting data mesh and fabric, artificial intelligence, virtualization, literacy, and metadata governance.
Join Bob Seiner as he reflects on the past and what has and has not worked, while sharing examples of enterprise successes and struggles. In this webinar, Bob will challenge the audience to stay a step ahead by learning from the past and blazing a new trail into the future of Data Governance.
In this webinar, Bob will focus on:
- Data Governance’s past, present, and future
- How trials and tribulations evolve to success
- Leveraging lessons learned to improve productivity
- The great Data Governance tool explosion
- The future of Data Governance
Data Governance Trends and Best Practices To Implement Today (DATAVERSITY)
Would you share your bank account information on social media? How about shouting your social security number on the New York City subway? We didn’t think so either – that’s why data governance is consistently top of mind.
In this webinar, we’ll discuss the common Cloud data governance best practices – and how to apply them today. Join us to uncover Google Cloud’s investment in data governance and learn practical and doable methods around key management and confidential computing. Hear real customer experiences and leave with insights that you can share with your team. Let’s get solving.
Topics that you will hear addressed in this webinar:
- Understanding the basics of Cloud Incident Response (IR) and anticipated data governance trends
- Best practices for key management and applying data governance to your day-to-day work
- The next wave of Confidential Computing and how to get started, including a demo
It is a fascinating, explosive time for enterprise analytics.
It is from the position of analytics leadership that the enterprise mission will be executed and company leadership will emerge. The data professional is absolutely sitting on the performance of the company in this information economy and has an obligation to demonstrate the possibilities and originate the architecture, data, and projects that will deliver analytics. After all, no matter what business you’re in, you’re in the business of analytics.
The coming years will be full of big changes in enterprise analytics and data architecture. William will kick off the fifth year of the Advanced Analytics series with a discussion of the trends winning organizations should build into their plans, expectations, vision, and awareness now.
Too often I hear the question “Can you help me with our data strategy?” Unfortunately, for most, this is the wrong request because it focuses on the least valuable component: the data strategy itself. A more useful request is: “Can you help me apply data strategically?” Yes, at early maturity phases the process of developing strategic thinking about data is more important than the actual product! Trying to write a good (much less perfect) data strategy on the first attempt is generally not productive – particularly given the widespread acceptance of Mike Tyson’s truism: “Everybody has a plan until they get punched in the face.” This program refocuses efforts on learning how to iteratively improve the way data is strategically applied. This will permit data-based strategy components to keep up with agile, evolving organizational strategies. It also contributes to three primary organizational data goals. Learn how to improve the following:
- Your organization’s data
- The way your people use data
- The way your people use data to achieve your organizational strategy
This will help in ways never imagined. Data are your sole non-depletable, non-degradable, durable strategic assets, and they are pervasively shared across every organizational area. Addressing existing challenges programmatically includes overcoming necessary but insufficient prerequisites and developing a disciplined, repeatable means of improving business objectives. This process (based on the theory of constraints) is where the strategic data work really occurs as organizations identify prioritized areas where better assets, literacy, and support (data strategy components) can help an organization better achieve specific strategic objectives. Then the process becomes lather, rinse, and repeat. Several complementary concepts are also covered, including:
- A cohesive argument for why data strategy is necessary for effective data governance
- An overview of prerequisites for effective strategic use of data strategy, as well as common pitfalls
- A repeatable process for identifying and removing data constraints
- The importance of balancing business operation and innovation
Who Should Own Data Governance – IT or Business? (DATAVERSITY)
The question is asked all the time: “What part of the organization should own your Data Governance program?” The typical answers are “the business” and “IT (information technology).” Another answer to that question is “Yes.” The program must be owned and reside somewhere in the organization. You may ask yourself if there is a correct answer to the question.
Join this new RWDG webinar with Bob Seiner where Bob will answer the question that is the title of this webinar. Determining ownership of Data Governance is a vital first step. Figuring out the appropriate part of the organization to manage the program is an important second step. This webinar will help you address these questions and more.
In this session Bob will share:
- What is meant by “the business” when it comes to owning Data Governance
- Why some people say that Data Governance in IT is destined to fail
- Examples of IT positioned Data Governance success
- Considerations for answering the question in your organization
- The final answer to the question of who should own Data Governance
It is clear that Data Management best practices exist and so does a useful process for improving existing Data Management practices. The question arises: Since we understand the goal, how does one design a process for Data Management goal achievement? This program describes what must be done at the programmatic level to achieve better data use and a way to implement this as part of your data program. The approach combines DMBoK content and CMMI/DMM processes – permitting organizations with the opportunity to benefit from the best of both. It also permits organizations to understand:
- Their current Data Management practices
- Strengths that should be leveraged
- Remediation opportunities
MLOps – Applying DevOps to Competitive AdvantageDATAVERSITY
MLOps is a practice for collaboration between Data Science and operations to manage the production machine learning (ML) lifecycles. As an amalgamation of “machine learning” and “operations,” MLOps applies DevOps principles to ML delivery, enabling the delivery of ML-based innovation at scale to result in:
Faster time to market of ML-based solutions
More rapid rate of experimentation, driving innovation
Assurance of quality, trustworthiness, and ethical AI
MLOps is essential for scaling ML. Without it, enterprises risk struggling with costly overhead and stalled progress. Several vendors have emerged with offerings to support MLOps: the major offerings are Microsoft Azure ML and Google Vertex AI. We looked at these offerings from the perspective of enterprise features and time-to-value.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Secstrike : Reverse Engineering & Pwnable tools for CTF.pptx
Finding the Right Data Solution for your Application in the Data Storage Haystack
1. Finding the Right Data Solution for Your Application in the Data Storage Haystack
Srinath Perera Ph.D.
Senior Software Architect, WSO2 Inc.
Visiting Faculty, University of Moratuwa
Research Scientist, Lanka Software Foundation
2. Data Models
§ There have been many data models proposed over the years (read Stonebraker's "What Goes Around Comes Around" for more details):
o Hierarchical (IMS): late 1960s and 1970s
o Directed graph (CODASYL): 1970s
o Relational: 1970s and early 1980s
o Entity-Relationship: 1970s
o Extended Relational: 1980s
o Semantic: late 1970s and 1980s
§ For the last 20-30 years, relational database systems (SQL) together with transactions have been the de facto data solution.
3. For many years, the choice of data storage was an easy one (use an RDBMS).
4. Scale of Systems
§ However, the scale of systems is changing due to:
o Increasing user bases of systems
o Mobile devices, online presence
o Cloud computing and multicore systems
§ Scaling up an RDBMS:
o Put it in a bigger machine.
o Replicate (cluster) the database to 2-3 more nodes. But this approach does not scale up.
o Partition the data across many nodes (distribute, a.k.a. sharding). However, JOIN queries across many nodes are hard, and sometimes too slow. This often needs custom code and configuration. Also, transactions do not scale as well.
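As a minimal sketch of the partitioning idea above (the node names are hypothetical, and real systems use more sophisticated placement), hash-based sharding assigns each key to one node:

```python
import zlib

# Hypothetical cluster of three storage nodes.
NODES = ["node-0", "node-1", "node-2"]

def shard_for(key: str) -> str:
    """Assign a key to a node by hashing it (simple modulo partitioning).

    zlib.crc32 is used instead of hash() so the assignment is
    deterministic across runs.
    """
    return NODES[zlib.crc32(key.encode()) % len(NODES)]

print(shard_for("user:42"))  # always the same node for the same key
```

A JOIN between rows that land on different nodes must move data across the network, which is why cross-node JOINs need custom code and are often slow.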
5. CAP Theorem, Transactions, and Storage
§ The RDBMS model provides two things:
o The relational model with SQL
o ACID transactions (Atomicity, Consistency, Isolation, Durability)
§ It was a classic one-size-fits-all solution, and it worked for quite some time.
§ However, the CAP theorem says that you cannot have it all:
o Consistency, Availability, and Partition Tolerance: pick two!
§ But there are many use cases that do not need all RDBMS features; when those are dropped, systems can scale (e.g. Google Bigtable).
§ However, to use them, one has to understand and exploit the application-specific behavior.
6. NoSQL and Other Storage Systems
§ Large internet companies hit the problem first; they built systems specific to their problems, and those systems did scale:
o Google Bigtable
o Amazon Dynamo
§ Soon many others followed, and most of them are free and open source. Now there are a couple of dozen.
§ Among the advantages of NoSQL are:
o Scalability
o Flexible schema
o Designed to scale and support fault tolerance out of the box
7. However, with NoSQL solutions, choosing a data store is no longer simple.
8. Selecting the Right Data Solution
§ What are the right questions to ask?
§ Categorize the answers for each question.
§ Take different cases based on the answers and make recommendations!
9. What Are the Right Questions?
o Type of data
- Structured, semi-structured, unstructured
o Need for scalability
- Number of users
- Number of data items
- Size of files
- Read/write ratio
o Types of queries
- Retrieve by key
- WHERE clauses
- JOIN queries
- Offline queries
o Consistency
- Loose consistency
- Single-operation consistency
- Transactions
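The four dimensions above can be written down as a simple requirements record before shopping for a storage system. The following sketch is illustrative only; all names and categories are taken from the slides, not from any real library:

```python
from dataclasses import dataclass
from enum import Enum

class DataType(Enum):
    STRUCTURED = "structured"
    SEMI_STRUCTURED = "semi-structured"
    UNSTRUCTURED = "unstructured"

class Consistency(Enum):
    LOOSE = "loose"          # eventual consistency is acceptable
    OPERATION = "operation"  # single-operation (atomic) consistency
    TRANSACTIONS = "acid"    # full ACID transactions required

@dataclass
class StorageRequirements:
    """Answers to the four questions, gathered per application."""
    data_type: DataType
    users: int               # expected number of users
    data_items: int          # expected number of stored items
    read_write_ratio: float  # reads per write
    query_types: set         # e.g. {"key", "where", "join", "offline"}
    consistency: Consistency

# Example: a read-heavy structured application with key and WHERE queries.
reqs = StorageRequirements(
    data_type=DataType.STRUCTURED,
    users=100_000,
    data_items=10_000_000,
    read_write_ratio=9.0,
    query_types={"key", "where"},
    consistency=Consistency.OPERATION,
)
```

Filling in such a record forces the answers to be explicit before the later decision tables are consulted.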
10. Unstructured Data
§ The data do not have a particular structure and are often retrieved through a key (name).
o E.g. file systems
§ Humans are good at processing unstructured data, but computers are not.
§ Such data are often kept in storage but consumed by humans at the end of the pipeline (e.g. a document repository).
§ One common use case is building structured data from unstructured data.
§ Metadata is often associated with the data to help searching.
11. Structured Data
§ Have a structure, often described through a schema.
§ Often a table-like 2D structure is used, but other structures are also possible.
§ The main advantage of the structure is search.
§ The schema can be provided at deployment time or at runtime (dynamic schema).
§ The schema can be used to:
o Validate data
o Support user-friendly search
o Optimize storage and queries
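As a minimal sketch of the "validate data" use of a schema (the schema format here is made up for illustration; real systems use DDL, JSON Schema, and the like):

```python
# A schema maps column names to expected Python types (hypothetical example).
SCHEMA = {"id": int, "name": str, "age": int}

def validate(row: dict) -> bool:
    """Check that a row has exactly the schema's columns with the right types."""
    if set(row) != set(SCHEMA):
        return False  # missing or extra columns
    return all(isinstance(row[col], typ) for col, typ in SCHEMA.items())

print(validate({"id": 1, "name": "ann", "age": 30}))    # True
print(validate({"id": "1", "name": "ann", "age": 30}))  # False: id has the wrong type
```

The same schema information is what lets a storage engine index columns and plan queries.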
12. Semi-structured Data
§ The structure is not fully defined, but there is some inherent structure.
§ For example:
o XML documents, where data are stored in a tree-like structure
o Graph data
o Data structures like lists and arrays
§ Support queries based on the structure.
§ But processing the data often needs custom code.
13. Search
§ Unstructured data: no structure to support search.
o Search based on an inverted index
o Search through properties
§ Semi-structured data:
o XML (or any tree-like structure) can be searched with XPath or XQuery.
o Tuple spaces can be queried through tuple-space templates.
o Data registries can be searched for entries that match given metadata descriptions (search by properties).
o Graphs can be queried based on connectivity.
§ Structured data:
o Retrieve by key
o WHERE clauses
o Queries with JOINs
o Offline queries
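The inverted index mentioned for unstructured data can be sketched in a few lines (a toy word-level index; real search engines add tokenization, stemming, and ranking):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each word to the set of document keys that contain it."""
    index = defaultdict(set)
    for key, text in docs.items():
        for word in text.lower().split():
            index[word].add(key)
    return index

docs = {
    "d1": "data storage haystack",
    "d2": "relational data models",
}
index = build_inverted_index(docs)
print(sorted(index["data"]))  # ['d1', 'd2'] - both documents contain "data"
```

A property search works the same way, except the index keys are metadata property values rather than words from the content.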
14. Consistency and Scalability
§ Scalability: the ability to handle more users, more data, or larger files by adding more nodes. We will use 3 categories:
o Small systems (can be handled with 1-3 nodes)
o Scalable systems (can be handled with about 10 nodes)
o Highly scalable systems (anything larger; can be 100s or 1000s of nodes)
§ Consistency: how replicas of the same data on many nodes are kept in sync and updated without data corruption. We will use 3 categories:
o Transactional: a series of operations updated in an ACID manner
o Atomic operation: a single operation, updated in all replicas
o Eventual consistency: the data will eventually become consistent
16. Data Storage Implementations
§ Expectations from a data store:
o Reliably store the data
o Efficient search and retrieval of the data whenever needed
o Data management: delete and update data
17. Challenges of Data Storage
§ Reliability
o Replicating data
o Creating backups, or recovering using backups
§ Security
§ Scaling and parallel access
o Distribution or replication
o ACID transactions
§ Availability
o Data replication
§ Vendor lock-in
o Interoperability, standard query languages
§ Simple user experience
o Hide the physical location of data
o Provide simple APIs and security models
o Expressive query languages
18. Data Storage Choices

Storage                 | Type       | Advantages                                   | Disadvantages                                  | Key | Where                | Joins | Transactions    | Scale    | Flexible schema
Local memory            |            | Very fast                                    | Not durable                                    | Yes | No                   | No    | No, unless STMs | No       | Yes
Relational/SQL          |            | Standardized                                 | Rigid schema; good for read-oriented use cases | Yes | Yes                  | Yes   | Yes             | Moderate | No
Column families (NoSQL) |            | High write performance, replicated           | Not transactional, no online joins             | Yes | Yes, secondary index | No    | No              | High     | Yes
Document DBs            |            | High write performance, replicated           | Not transactional, no online joins             | Yes | Yes, views           | No    | No              | Yes      | Yes
Object databases        | Structured | Easy to integrate with programming languages |                                                | Yes | Yes                  | Yes   | Yes             | No       | No
19. Data Storage Choices (continued)

Storage                             | Type            | Advantages                                             | Disadvantages                                     | Key | Search                  | Transactions | Scale    | Flexible schema
Files                               |                 | Save big files whose format is not understood          | No structured search on content                   | Yes | Indexing                | No           | Moderate | Yes
Data registries / metadata catalogs | Unstructured    | Metadata-based search                                  |                                                   | Yes | Property search (WHERE) | No           | Moderate | Yes
Queues                              |                 | Representation of a flow of messages over time / tasks |                                                   | Yes | N/A                     | No           | Yes      | Yes
Triple stores                       |                 | Used for inference; very fast relationship processing  |                                                   | Yes | Relationship search     | No           | No       | Yes
XML databases                       |                 | XML native                                             |                                                   |     | XPath/XQuery            |              |          |
Distributed cache                   |                 | Fast, replicated                                       | No search                                         | Yes | No                      | No           | Yes      | Yes
Key-value pairs                     |                 | High write performance, replicated                     | Model too simple in some cases; not transactional | Yes | No                      | No           | Yes      | Yes
Graph DBs                           | Semi-structured | Very fast joins; natural to represent relationships    | Not very scalable                                 | Yes | Graph search            | Yes          | Low      | N/A
21. How Do We Do This?
§ Consider structured, semi-structured, and unstructured data separately.
o Then drill down based on the other 3 properties: scale, consistency, and search.
§ The structured case is more complicated; the other two are a bit simpler.
§ Start by giving a de facto choice for each case.
22. Handling Structured Data
§ There are three main considerations: scale, consistency, and queries.

             Small (1-3 nodes)                 Scalable (~10 nodes)                        Highly scalable (1000s of nodes)
             Loose      Operation  ACID        Loose       Operation   ACID                Loose    Operation  ACID
Primary key  DB/KV/CF   DB/KV/CF   DB          KV/CF       KV/CF       Partitioned DB?     KV/CF    KV/CF      No
WHERE        DB/CF/Doc  DB/CF/Doc  DB          CF/Doc (?)  CF/Doc (?)  Partitioned DB?     CF/Doc   CF/Doc     No
JOIN         DB         DB         DB          ??          ??          ??                  No       No         No
Offline      DB/CF/Doc  DB/CF/Doc  DB/CF/Doc   CF/Doc      CF/Doc      No                  CF/Doc   CF/Doc     No

*KV: key-value systems, CF: column families, Doc: document-based systems
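The decision table above can be encoded directly in code. The sketch below is a partial, hypothetical encoding (only a few cells are shown, and the fallback message is mine, not from the slides):

```python
# Cells of the decision table: (scale, consistency) -> query type -> options.
# DB = relational, KV = key-value, CF = column family, Doc = document store.
DECISION_TABLE = {
    ("small", "loose"): {"key": ["DB", "KV", "CF"], "where": ["DB", "CF", "Doc"],
                         "join": ["DB"], "offline": ["DB", "CF", "Doc"]},
    ("small", "acid"):  {"key": ["DB"], "where": ["DB"],
                         "join": ["DB"], "offline": ["DB", "CF", "Doc"]},
    ("high", "loose"):  {"key": ["KV", "CF"], "where": ["CF", "Doc"],
                         "join": [], "offline": ["CF", "Doc"]},
    ("high", "acid"):   {"key": [], "where": [], "join": [], "offline": []},
}

def recommend(scale, consistency, query):
    """Look up storage options; an empty cell means the combination has no good fit."""
    options = DECISION_TABLE.get((scale, consistency), {}).get(query, [])
    return options or ["no good fit - rethink requirements"]

print(recommend("high", "loose", "key"))   # ['KV', 'CF']
print(recommend("high", "acid", "join"))   # no good fit at this scale
```

The empty cells are the important part: they make explicit which combinations (e.g. ACID JOINs at high scale) simply have no scalable answer.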
23. Handling Small-Scale Systems (1-3 nodes)

             Loose      Operation  ACID
Primary key  DB/KV/CF   DB/KV/CF   DB
WHERE        DB/CF/Doc  DB/CF/Doc  DB
JOIN         DB         DB         DB
Offline      DB/CF/Doc  DB/CF/Doc  DB/CF/Doc

§ In general, using a DB here for every case might work.
§ Reasons for using options other than a DB:
o When there is a potential need to scale later
o High write throughput
§ KV is 1-D, whereas the other two are 2-D.

*KV: key-value systems, CF: column families, Doc: document-based systems
24. Handling Scalable Systems (~10 nodes)

             Loose   Operation  ACID
Primary key  KV/CF   KV/CF      Partitioned DB?
WHERE        CF/Doc  CF/Doc     Partitioned DB?
JOIN         ??      ??         Partitioned DB??
Offline      CF/Doc  CF/Doc     No

§ KV, CF, and Doc can easily handle this case.
§ If DBs are used with data sharded across many nodes:
o Transactions might work, given that the participants in one transaction are not too many.
o JOINs might need to transfer too much data between nodes.
o Also consider in-memory DBs like VoltDB.
§ Offline mode will work.
§ Most systems let users choose the consistency level, and loose consistency can scale more (e.g. Cassandra).

*KV: key-value systems, CF: column families, Doc: document-based systems
25. Highly Scalable Systems (1000s of nodes)

             Loose   Operation  ACID
Primary key  KV/CF   KV/CF      No
WHERE        CF/Doc  CF/Doc     No
JOIN         No      No         No
Offline      CF/Doc  CF/Doc     No

§ Transactions do not work at this scale (CAP theorem).
§ The same goes for JOINs: the problem is that sometimes too much data needs to be transferred between nodes to perform the JOIN.
§ The offline case is handled through MapReduce; even the JOIN case is OK since there is time.

*KV: key-value systems, CF: column families, Doc: document-based systems
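The offline JOIN via MapReduce can be sketched as a reduce-side join. This toy version runs in one process with invented table names ("users", "orders"); a real job would distribute the map and reduce phases across the cluster:

```python
from collections import defaultdict

def map_phase(table_name, rows, join_key):
    """Emit (join_key_value, (table_name, row)) pairs, tagging each row with its table."""
    for row in rows:
        yield row[join_key], (table_name, row)

def reduce_phase(grouped):
    """For each key, combine rows from both tables - this is the actual join."""
    for key, tagged_rows in grouped.items():
        left = [r for t, r in tagged_rows if t == "users"]
        right = [r for t, r in tagged_rows if t == "orders"]
        for l in left:
            for r in right:
                yield {**l, **r}

users = [{"uid": 1, "name": "ann"}]
orders = [{"uid": 1, "item": "book"}, {"uid": 1, "item": "pen"}]

# The shuffle step: group map output by join key.
grouped = defaultdict(list)
for k, v in list(map_phase("users", users, "uid")) + list(map_phase("orders", orders, "uid")):
    grouped[k].append(v)

print(list(reduce_phase(grouped)))  # two joined rows: ann+book and ann+pen
```

Because the shuffle moves all rows sharing a key to the same reducer, the join works even at very large scale; it is just batch-oriented rather than online.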
26. Highly Scalable Systems + Primary-Key Retrieval

             Loose       Operation   ACID
Primary key  KV/CF       KV/CF       No
WHERE        CF/Doc (?)  CF/Doc (?)  No
JOIN         No          No          No
Offline      CF/Doc      CF/Doc      No

§ This is (comparatively) the easy one.
§ It can be solved through DHT (Distributed Hash Table) based solutions or architectures like OceanStore.
§ Both key-value storage (KV) and column families (CF) can be used, but the key-value model is preferred as it is more scalable.

*KV: key-value systems, CF: column families, Doc: document-based systems
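The DHT idea behind key-value retrieval at this scale can be sketched with a consistent-hash ring (a minimal version: no virtual nodes or replication, and the node names are made up):

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Minimal consistent-hash ring: each key is owned by the first node
    clockwise from the key's position on the ring."""

    def __init__(self, nodes):
        # Place every node on the ring at the position given by its hash.
        self.ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Find the owning node by walking clockwise from the key's hash."""
        h = self._hash(key)
        positions = [p for p, _ in self.ring]
        i = bisect_right(positions, h) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))  # deterministic owner for this key
```

Unlike plain modulo sharding, adding or removing a node only moves the keys adjacent to it on the ring, which is what makes DHT-based stores practical at thousands of nodes.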
27. Highly Scalable Systems + WHERE

             Loose       Operation   ACID
Primary key  KV/CF       KV/CF       No
WHERE        CF/Doc (?)  CF/Doc (?)  No
JOIN         No          No          No
Offline      CF/Doc      CF/Doc      No

§ This is generally OK, but tricky.
§ CF works through a secondary index that does scatter-gather (e.g. Cassandra).
§ Doc works through MapReduce views (e.g. CouchDB).
§ There is Bissa, which builds an index for all possible queries (no range queries).
§ If you are doing this, you should do pilot runs and make sure things work.

*KV: key-value systems, CF: column families, Doc: document-based systems
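The scatter-gather pattern behind secondary-index WHERE queries can be sketched as follows (a toy in-process model with invented keys and column values; a real store sends the sub-queries over the network):

```python
# Each partition holds its own rows; a WHERE query must consult all of them.
partitions = [
    {"rows": {"k1": {"city": "colombo"}, "k2": {"city": "kandy"}}},
    {"rows": {"k3": {"city": "colombo"}}},
]

def local_where(partition, column, value):
    """The per-partition part: scan (or use a local secondary index)."""
    return [k for k, row in partition["rows"].items() if row.get(column) == value]

def scatter_gather_where(column, value):
    results = []
    for p in partitions:             # scatter: send the query to every partition
        results.extend(local_where(p, column, value))
    return sorted(results)           # gather: merge the partial results

print(scatter_gather_where("city", "colombo"))  # ['k1', 'k3']
```

This is why such queries are "OK, but tricky": every partition is contacted regardless of how few rows match, so latency is governed by the slowest node.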
28. Handling Unstructured Data
§ Storage options:
o Distributed file systems: generally scalable (e.g. NFS); HDFS (Hadoop) and Lustre are highly scalable versions.
o Metadata registries (e.g. Nirvana, SDSC Storage Resource Broker)
29. Handling Semi-Structured Data

                             Small scale (1-3 nodes)                  Scalable (~10 nodes)                       Highly scalable
XML (queried through XPath)  XML DB, or convert to a structured model XML DB, or convert to a structured model   ??
Graphs                       Graph DBs                                Graph DBs, if the graph can be partitioned ??
Data structures              Data structure servers, object databases
Queues                       Distributed queues                       Distributed queues                         Distributed queues

§ Storage options:
o The answer depends on the type of structure. If there is a server optimized for a given type, it is often much more efficient than using a DB (e.g. graph databases can support fast relationship search).
§ Search:
o Very much custom. E.g. XML or any tree = XPath; graphs can support very fast relationship search.
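As a small illustration of the XPath-style search mentioned above, Python's standard library can query an XML tree with a limited XPath subset (the document and element names here are invented for the example):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<library>
  <book year="2005"><title>Big Table Paper</title></book>
  <book year="2007"><title>Dynamo Paper</title></book>
</library>
""")

# ElementTree supports a limited XPath subset, including attribute predicates:
# find the titles of books published in 2007.
titles = [b.findtext("title") for b in doc.findall(".//book[@year='2007']")]
print(titles)  # ['Dynamo Paper']
```

A full XML database would support the complete XPath/XQuery languages, indexing, and updates, but the query style is the same: navigate the tree by structure rather than by table and column.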
30. Hybrid Approaches
§ Some solutions have many types of data and hence need more than one data solution (hybrid architectures).
§ For example:
o Using a DB for transactional data and CF for other data
o Keeping metadata and actual data separate for large data archives
o Using a graph DB to store relationship data while other data is in column-family storage
§ However, if transactions are needed, they have to be handled outside the storage (e.g. using Atomikos or ZooKeeper).
31. Other Parameters
§ The above list is not exhaustive; there are other parameters:
o Read/write ratio: when it is high, it is easy to scale
o High write throughput
o Very large data products: you will need a file system; maybe keep the metadata in a data registry and store the data in a file system
o Flexible schema
o Archival use cases
o Analytical use cases
o Others ...
§ So there is no silver bullet ...
32. Conclusion
§ For the last 20 years or so, DBMSs were the de facto storage solution.
§ However, DBMSs cannot always scale well, and many NoSQL solutions have been proposed instead.
§ As a result, it is no longer easy to find the best data solution for your problem.
§ We discussed many dimensions (types of data, scalability, queries, and consistency) and provided guidelines on when to use which data solution.
§ Your feedback and thoughts are most welcome. Contact me through srinath@wso2.com.