DataWiki is a versatile semantic enterprise wiki that enables communities of knowledge workers to formalise their expert knowledge easily. The socially curated knowledge base is enriched with data from external enterprise databases and made available to the wiki users (semantic data integration).
DataWiki is a standard product from DIQA (www.diqa-pm.com).
Knowledge Management is a complex undertaking that must meet special requirements and needs in each new project. Flexible platforms covering different aspects of this "knowledge sharing" goal are needed as the technological underpinning.
This talk presents two platforms and their individual features:
* Semantic MediaWiki
* Microsoft SharePoint
Concrete examples from professional practice illustrate their strengths and weaknesses.
Register for the companion webinar:
http://forms.embarcadero.com/Dealing-with-New-Datatypes
Data modeling is going back to the future! No, it doesn’t include a hoverboard (yet), but it does include some new datatypes that capture temporal and spatial information. In the past, datatypes were used to classify various types of data, whether integers, characters, or alphanumeric strings. With the technologies introduced in recent years, these basic datatypes can’t address everything – data modelers now need more specialized datatypes for specific needs and new formats.
Multiple database platforms have introduced new datatypes that can make it easier to support more advanced data concepts in physical data models. If you are unsure what is new in the physical data modeling world, or what to do with it, Karen Lopez will discuss using a variety of new datatypes, including:
• Temporal, such as period, with keywords
• Spatial, including geospatial
• Others, incorporating JSON/BSON/UBJSON usage
Learn more about ER/Studio at:
http://www.embarcadero.com/products/er-studio
K Cube Ventures is the leading venture capital firm in South Korea. Learn more about our portfolio companies and what we do by viewing our 2016 Media Kit.
SBI Magnum Balanced Fund: An Open-ended Balanced Scheme - Dec 16 (SBI Mutual Fund)
SBI Magnum Balanced Fund invests in a mix of equity and debt investments. It provides a good investment opportunity to investors who do not wish to be completely exposed to equity markets, but are looking for relatively higher returns than those provided by debt funds. The scheme invests in a diversified portfolio of equities of high-growth companies and balances the risk by investing the rest in a relatively safe portfolio of debt. To know more about this mutual fund, see the SBI Mutual Fund page:
https://www.sbimf.com/Products/HybridSchemes/Magnum_Balanced_Fund.aspx
NoSQL Simplified: Schema vs. Schema-less (InfiniteGraph)
A look at the many facets of schema-less approaches vs a rich schema approach, ranging from performance and query support to heterogeneity and code/data migration issues. Presented by Leon Guzenda, Founder, Objectivity
Belgium & Luxembourg dedicated online Data Virtualization discovery workshop (Denodo)
Watch full webinar here: https://bit.ly/33yYuQm
Data virtualization has become an essential part of enterprise data architectures, bridging the gap between IT and business users and delivering significant cost and time savings. This technology revolutionizes the way data is accessed, delivered, consumed and governed regardless of its format and location.
This 1.5-hour discovery session will help you identify the benefits of this modern and agile data integration and management technology for your organisation.
Government GraphSummit: And Then There Were 15 Standards (Neo4j)
Todd Pihl, PhD, Technical Project Manager, and Mark Jensen, Director of Data Management and Interoperability, National Institutes of Health, Frederick National Laboratory for Cancer Research
Data repositories such as NCI’s Cancer Research Data Commons receive data that use a variety of data models and vocabularies. This presents a significant obstacle to finding and using the data outside of their original purpose. In this talk we’ll show how using Neo4j allows different data models to be represented and mapped to each other, giving data managers a new way to provide harmonized data to their users.
Paolo Kreth - Persistence layers for microservices – the converged database a... (matteo mazzeri)
This talk will present the difference between a polyglot persistence and a converged database approach in mapping data for microservices. A historical point of view will help us understand the difficulties of operating different databases and stores, and the repercussions operational bottlenecks have on development.
Objectivity/DB: A Multipurpose NoSQL Database (InfiniteGraph)
The speakers will describe the flexible configuration possibilities that Objectivity/DB provides, with an emphasis on how best to distribute data across multiple storage nodes. The session will start by describing the distributed processing architecture of Objectivity/DB before covering the new Placement Manager features. The speakers will also describe how Objectivity/DB compares and contrasts with other NoSQL solutions.
Big Data Expo 2015 - Barnsten: Why Data Modelling is Essential (BigDataExpo)
Learn tips and tricks for handling data modeling in your Big Data environment. Mark will show how modeling adds value to the business and how to make your Big Data landscape transparent across the organization.
You will see the latest modeling techniques for Big Data and different types of modeling notations. You will also learn how to integrate data modeling into your BI environment.
Using Crowdsourced Images to Create Image Recognition Models with Analytics Z... (Databricks)
Volunteers around the world increasingly act as human sensors to collect millions of data points. A team from the World Bank trained deep learning models, using Apache Spark and BigDL, to confirm that photos gathered through a crowdsourced data collection pilot matched the goods for which observations were submitted.
In this talk, Maurice Nsabimana, a statistician at the World Bank, will demonstrate a collaborative project to design and train large-scale deep learning models using crowdsourced images from around the world. BigDL is a distributed deep learning library designed from the ground up to run natively on Apache Spark. It enables data engineers and scientists to write deep learning applications in Scala or Python as standard Spark programs, without having to explicitly manage distributed computations. Attendees of this session will learn how to get started with BigDL, which runs in any Apache Spark environment, whether on-premises or in the cloud.
Attendees will also learn how to write a deep learning application that leverages Spark to train image recognition models at scale.
Similar to Zloch, Bosch, Wegener: A technical perspective... (20)
Alive and kicking! Keeping data re-usable in the European Values Study:
- Data and information flow in the EVS project
- Principles and workflows for managing data and documentation in survey projects
DDI-RDF Discovery Vocabulary; A Metadata Vocabulary for Documenting Research and Survey Data:
Overview:
- What is DDI?
- Motivation
- Relationships to Vocabularies
- DDI-RDF Discovery Vocabulary
- Conceptual Model
Thesaurus-Based Indexing of Research Data in the Social Sciences: Opportunities and Difficulties of Internationalization Efforts
Contents:
- Current Trends and Demands in Describing and Cataloguing Research Data
- Subject Indexing of Research Data in the Social Sciences (Present Situation in Europe)
- Thesauri in Subject Indexing
- Recommended Indexing Model
- Retrieval Model
- Practical Aspects
Dr. Andreas Oskar Kempf, Ute Sondergeld: Indicator-Based Monitoring of an Interdisciplinary Field of Science. The Example of Educational Research - Presentation at IASSIST 2013
Natascha Schumann, Astrid Recker: De-mystifying OAIS compliance. Benefits and challenges of mapping the OAIS reference model to the GESIS Data Archive - Presentation at IASSIST 2013
How can subject-specific instruments of library and documentary content indexing be integrated into indexing systems that are visible nationally and, above all, more strongly internationally? This question is addressed by a cooperation between the German National Library (DNB) and GESIS - Leibniz Institute for the Social Sciences on the creation and evaluation of cross-concordances between the subject classification for the social sciences (KlassSoz) and the Dewey Decimal Classification (DDC), whose results will be presented at the BID Congress.
Cross-concordances address semantic heterogeneity by establishing links between semantic units of different indexing systems and qualifying these relations. This enables a uniform search across distributed and heterogeneously indexed information services, including across different document and data types: a single-database search becomes a distributed search scenario. At the same time, a locally used indexing instrument is closely tied to an internationally established indexing system.
The talk gives an overview of the methodology for creating, qualifying and evaluating the relations. Exemplary cases illustrate the exact mapping procedure. In addition, the conversion of the cross-concordances into the SKOS format, and thus the provision of the mappings as Linked Data in the Semantic Web, is discussed. Finally, possible uses of the mappings for knowledge exploration are outlined.
Index-based search technology, familiar from internet search engines, is gaining more and more followers in the library domain and beyond. Discovery systems profit from this technique and, in addition to high performance and scalability, bundle many of the functionalities libraries ask for.
With a discovery system, a library gains a user-friendly way to present all of its electronic holdings and connected services. Availability searches in OPACs, interlibrary loans, the enrichment of search results with covers, tables of contents or book-trade information, the export of metadata to reference management software, and more, can easily be integrated as central services via open standard interfaces. A Google-like search with faceting of the results is, of course, included.
The challenge is to use the flexibility of the system to achieve an even more consistent focus on the user. This naturally affects the basic conception and thus the setup and configuration of a discovery system. Based on findings from surveys and statistics, the needs of one's own users can be identified, and guidance for building such a target-group-specific service can be derived.
The path from an 'out-of-the-box' discovery system (VuFind) to an application serving as a discipline-oriented, user-centred information platform is presented here using the example of SOWIPORT.
Presentation given as part of a talk at the 9th Summer School on Ontology Engineering and the Semantic Web. Thomas Bosch (M.Sc.) thomas.bosch@gesis.org | http://boschthomas.blogspot.com
More from GESIS - Leibniz-Institut für Sozialwissenschaften (8)
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that do not adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work. It takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our beloved cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and offer a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into approaches I already have working in practice.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Leading Change: strategies and insights for effective change management
Zloch, Bosch, Wegener: A technical perspective...
1. A Technical Perspective on Use-Case-Driven Challenges for Software Architectures to Document Study and Variable Information
IASSIST 2013
29.05.2013
Matthäus Zloch
GESIS, Germany
matthaeus.zloch@gesis.org
Thomas Bosch
GESIS, Germany
thomas.bosch@gesis.org
http://boschthomas.blogspot.com
Dennis Wegener
GESIS, Germany
dennis.wegener@gesis.org
2. Outline
• What has already been said
• Challenges for MISSY Software Developers
• MISSY Software Architecture
• Implementation of DISCO
• Persistence Strategies
3. Thomas' Presentation
• General information about MISSY
• Next generation MISSY
• Software architecture overview
• Presentation layer and MISSY use cases
• Business logic
• Data model
• DDI-RDF Discovery Vocabulary
5. Requirements for Software Developers
• Focus lies on software reusability
• Must be stable and reliable
• API must be clean and easy to extend
• Flexible web application framework and modern architecture
• Service-oriented
• Use of Semantic Web technologies
• Complex data model to represent use cases (seen in the previous presentation)
6. Requirements for Software Developers
• Define and implement a common data model and different persistence strategies
• Creation of an abstract framework and architecture
• Should be well designed to be extendable and reusable
• Available as open source software
• Independent of the end-user system
8. Software Architecture – Design Goals
• Separation of
• Model, i.e. concepts and real-life objects, that represents the use case
• (Physical) storage mechanisms
• Logic that controls and provides services to manipulate the data
• The representation of the information itself
• The key is to have logically separated parts, where people might work independently but collaboratively
• Creation of a reusable and extendable abstract API
9. Software Architecture
• State-of-the-art technologies to develop software
• Multitier architecture
• Model-View-Controller (MVC pattern)
• Maven projects + modules
• The multitier architecture separates the project into logical parts
• Presentation, application processing, data, persistence, …
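The MVC separation named on this slide can be sketched generically. This is not MISSY code; the class names and the study/title example are invented for illustration, assuming a plain-Java setting without any web framework:

```java
// Minimal MVC sketch: the model holds state, the view renders it,
// and the controller mediates changes to the model.
class StudyModel {                       // Model: holds the data
    private String title = "";
    void setTitle(String t) { title = t; }
    String getTitle() { return title; }
}

class StudyView {                        // View: renders the model
    String render(StudyModel m) { return "Study: " + m.getTitle(); }
}

class StudyController {                  // Controller: manipulates the model
    private final StudyModel model;
    StudyController(StudyModel model) { this.model = model; }
    void rename(String newTitle) { model.setTitle(newTitle); }
}
```

The point of the separation is that the view never mutates the model directly, so presentation and business logic can evolve independently, as the design goals on the previous slide demand.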
16. Data Model
• DDI-RDF Discovery Vocabulary (DISCO)
• Designed for the discovery use case
• Provides object types, properties and datatype properties designed for the discovery use case
• We use DISCO as the internal data model
• Implemented in Java
• Maps all object properties available
• Subclass relationships through Java native inheritance
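As a hedged illustration of the last two points, DISCO object types and their links might map to Java classes like this. The class names are assumptions loosely based on DISCO terms (disco:Study, disco:Variable); this is not the actual MISSY implementation:

```java
// Sketch: DISCO object types as Java classes. Subclass relationships use
// native inheritance; object properties become typed references.
abstract class DiscoResource {           // assumed common base for identifiable resources
    private final String id;
    protected DiscoResource(String id) { this.id = id; }
    public String getId() { return id; }
}

class Study extends DiscoResource {      // stands in for disco:Study
    private final String title;
    public Study(String id, String title) { super(id); this.title = title; }
    public String getTitle() { return title; }
}

class Variable extends DiscoResource {   // stands in for disco:Variable
    private final Study study;           // object property linking variable to study
    public Variable(String id, Study study) { super(id); this.study = study; }
    public Study getStudy() { return study; }
}
```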
19. Extendible Data Model
• DISCO does not cover all use cases
• Projects may have individual needs
• DISCO-model objects may be extended
(Diagram: your project model extends the DISCO model – provide this as an API!)
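The extension mechanism in the diagram can be sketched as plain Java inheritance: a project-specific class extends a DISCO-model class and adds what the project needs. `ProjectVariable` and its extra field are hypothetical names, not taken from the actual project:

```java
// Sketch: a project model extending the DISCO model via inheritance.
class DiscoVariable {                         // stands in for a DISCO-model class
    private final String label;
    DiscoVariable(String label) { this.label = label; }
    String getLabel() { return label; }
}

class ProjectVariable extends DiscoVariable { // your project-model class
    private final String internalNote;        // hypothetical project-specific addition
    ProjectVariable(String label, String internalNote) {
        super(label);
        this.internalNote = internalNote;
    }
    String getInternalNote() { return internalNote; }
}
```

Because the project class is still a `DiscoVariable`, all code written against the DISCO API keeps working on it, which is what makes providing the model as an API attractive.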
27. Persistence-Layer – Strategies
• The application itself does not need to know how the data is (physically) stored
• Methods are provided to access and store objects through data access objects
• The actual implementation is "hidden" from the upper layers
• A strategy is an implementation of the actual type of persistence or physical storage, respectively
• e.g. DDI-L-XML, DDI-RDF, XML-DB, Relational-DB, etc.
• Due to performance: disco-persistence-relational
(Diagram: disco-persistence-api implemented by disco-persistence-relational)
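A minimal sketch of this strategy idea, assuming a hypothetical `StudyDao` as part of the abstract persistence API; the in-memory implementation below stands in for the relational/XML/RDF strategies named on the slide:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The upper layers depend only on this data-access interface
// (the role played by disco-persistence-api).
interface StudyDao {
    void save(String id, String title);
    Optional<String> findTitle(String id);
}

// One interchangeable strategy; a relational or XML strategy would
// implement the same interface with different storage behind it.
class InMemoryStudyDao implements StudyDao {
    private final Map<String, String> store = new HashMap<>();
    public void save(String id, String title) { store.put(id, title); }
    public Optional<String> findTitle(String id) {
        return Optional.ofNullable(store.get(id));
    }
}
```

Swapping the physical storage then means swapping the implementation class, while every caller keeps talking to `StudyDao`.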
28. Persistence-Layer – Strategies / Modules
• disco-persistence-api
• Defines persistence functionality for model components regardless of the actual type of physical persistence
• disco-persistence-relational
• Implements the persistence functionality defined in disco-persistence-api with respect to the usage of relational DBs
• disco-persistence-xml
• Implements the persistence functionality defined in disco-persistence-api with respect to the usage of DDI-XML
• disco-persistence-rdf
• Implements the persistence functionality defined in disco-persistence-api with respect to the usage of the DISCO specification
39. Thank you for your attention
Matthäus Zloch
Team Architecture
GESIS, Germany
matthaeus.zloch@gesis.org
The MISSY Project
http://github.com/missy-project