See the more recent presentations: http://www.slideshare.net/jneubert/linked-data-enhanced-publishing-for-special-collections-with-drupal and http://www.slideshare.net/jneubert/swib13-drupal-ws
An Introduction to the Open Archives Initiative Object Reuse and Exchange (OAI-ORE) – Jenn Riley
Riley, Jenn. "An Introduction to the Open Archives Initiative Object Reuse and Exchange (OAI-ORE)." Digital Library Program Brown Bag Presentation, November 19, 2008.
Exploiting the version history of SKOS files: skos-history (SWIB13 Lightning Talk) – Joachim Neubert
The document discusses exploiting the version history of SKOS (Simple Knowledge Organization System) files by publishing and formally tracking changes between versions of SKOS vocabularies. This would help address common questions about what is new or changed when a new version is published. It could benefit human indexers learning about updated terms, those maintaining mappings to other vocabularies, and applications supporting automatic indexing and search. The authors propose tracking changes using a GitHub repository called "skos-history" to establish best practices for maintaining SKOS version histories.
Linked data enhanced publishing for special collections (with Drupal) – Joachim Neubert
This document discusses using Drupal 7 as a content management system for publishing special collections as linked open data. It provides an overview of how Drupal allows customizing content types and fields for mapping to RDF properties. While Drupal 7 provides basic RDFa support out of the box, there are some limitations around nested RDF structures and multiple entities per page that may require custom code. The document outlines some additional linked data modules for Drupal 7 and highlights improved RDF support anticipated in Drupal 8.
Using Wikidata as an Authority for the SowiDataNet Research Data Repository – Joachim Neubert
Wikidata provides a comprehensive authority for geographical entities that can be used by the SowiDataNet Research Data Repository. Wikidata contains countries, German states, the European Union, and geographical regions without needing to create a custom authority. A custom query can access just the required geographical data items from Wikidata, gaining identifiers, multilingual labels, links to other identifiers, and abundant information from Wikipedia with minimal cost and effort compared to maintaining a custom authority.
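To make this concrete, the kind of query described might look as follows. This is a minimal sketch against the public Wikidata SPARQL endpoint (https://query.wikidata.org/sparql); P31 ("instance of"), Q6256 ("country") and P297 (ISO 3166-1 alpha-2 code) are real Wikidata identifiers, but the exact selection of items used by SowiDataNet is an assumption here.

# Minimal sketch: pull countries with multilingual labels and ISO codes
# from Wikidata instead of maintaining a custom geographical authority.
SELECT ?item ?itemLabel ?isoCode WHERE {
  ?item wdt:P31 wd:Q6256 .              # instance of: country
  OPTIONAL { ?item wdt:P297 ?isoCode }  # ISO 3166-1 alpha-2 code, if any
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,de". }
}
ORDER BY ?itemLabel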
SWIB14 presentation
Over time, Knowledge Organization Systems such as thesauri and classifications undergo many changes as their knowledge domains evolve. Most SKOS publishers therefore put a version tag on their vocabularies. With the vocabularies interwoven into the open web of data, however, different versions may be the basis for references in other datasets, so updates by "third parties" are required, in indexing data as well as in mappings from or to other vocabularies. Yet answers to simple user questions such as "What's new?" or "What has changed?" are not easily obtainable. Best practices and shared standards for communicating changes precisely and making them (machine-)actionable have yet to emerge. The STW Thesaurus for Economics is currently undergoing a series of major revisions. In a case study we review the amount and the types of changes in this process, and demonstrate how versioning in general, and difficult types of changes such as the abandonment of descriptors in particular, are handled. Furthermore, a method to get a tight grip on the changes, based on SPARQL queries over named graphs, is presented. Finally, the skos-history activity is introduced, which aims at developing an ontology/application profile and best practices for describing SKOS versions and changes.
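The named-graph method can be sketched with a query like the following, which lists concepts present in a newer version graph but absent from the older one. The graph URIs and version numbers are hypothetical placeholders; skos-history defines its own conventions for naming version graphs.

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
# Sketch: concepts added between two vocabulary versions, each loaded
# into its own named graph (graph URIs are hypothetical placeholders).
SELECT ?concept ?label WHERE {
  GRAPH <http://example.org/stw/version/9.0> {
    ?concept a skos:Concept ; skos:prefLabel ?label .
    FILTER (lang(?label) = "en")
  }
  FILTER NOT EXISTS {
    GRAPH <http://example.org/stw/version/8.14> { ?concept a skos:Concept }
  }
}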
Opportunities and challenges presented by Wikidata in the context of biocuration – Benjamin Good
Abstract: Wikidata is a world-readable and world-writable knowledge base maintained by the Wikimedia Foundation. It offers the opportunity to collaboratively construct a fully open-access knowledge graph spanning biology, medicine, and all other domains of knowledge. To meet this potential, social and technical challenges must be overcome, many of which are familiar to the biocuration community. These include community ontology building, high-precision information extraction, provenance, and license management. By working together with Wikidata now, we can help shape it into a trustworthy, unencumbered central node in the Semantic Web of biomedical data.
LITA Preconference: Getting Started with Drupal (handout) – Rachel Vacek
This document provides an overview of popular modules for the content management system Drupal, focusing on modules useful for libraries. It discusses modules for administration, content management, performance, navigation, user management, and library-specific functions. Popular modules are highlighted for tasks like custom fields, views, panels, web forms, images, editors, spam prevention, taxonomy, scheduling, groups, analytics, events, authentication, searching catalogs and databases. Resources for learning Drupal like books, tutorials, communities and publications are also listed.
Linked Data Publishing with Drupal (SWIB13 workshop) – Joachim Neubert
Publishing Linked Open Data in a user-appealing way is still a challenge: generic solutions that convert arbitrary RDF structures to HTML out of the box are available, but leave users perplexed, while custom-built web applications that enrich web pages with semantic tags "under the hood" require a high programming effort. Given this dilemma, content management systems (CMS) could be a natural enhancement point for data on the web. In the case of Drupal, one of the most popular CMS today, Semantic Web enrichment is provided as part of the CMS core. In a simple declarative approach, classes and properties from arbitrary vocabularies can be added to Drupal content types and fields, and are turned into Linked Data on the web pages automagically. The embedded RDFa-marked-up data can easily be extracted by other applications. This makes the pages part of the emerging Web of Data, and at the same time helps discoverability with the major search engines.
In the workshop, you will learn how to make use of the built-in Drupal 7 features to produce RDFa-enriched pages. You will build new content types, add custom fields, and enhance them with RDF markup from mixed vocabularies. The gory details of providing LOD-compatible "cool" URIs will not be skipped, and current limitations of RDF support in Drupal will be explained. Exposing the data in a RESTful application programming interface or as a SPARQL endpoint are additional options provided by Drupal modules. The workshop will also introduce modules such as Web Taxonomy, which allows linking to thesauri or authority files on the web via a simple JSON-based autocomplete lookup. Finally, we will touch on the upcoming Drupal 8 version. (Workshop announcement)
This talk guides you through building modern web applications using ASP.NET MVC and MongoDB, one of the most popular NoSQL databases.
You will learn some best practices for getting started with MVC. We’ll cover building rich forms to accept user input. And if time permits, we might even add some client-side techniques using jQuery and MVC.
All of this will be built upon the powerful non-relational database MongoDB. We will discuss the origins of the so-called NoSQL movement and why you might choose a non-relational database over SQL Server. You’ll also see how our data access layer is built using LINQ to MongoDB.
Of course, you won’t be in for a night of PowerPoint. This talk is a series of interactive demos using Visual Studio 11, Windows 8, and C#.
NIF 2.0 Tutorial: Content Analysis and the Semantic Web – Sebastian Hellmann
This tutorial is held by Sebastian Hellmann from the NLP2RDF Group at AKSW:
The NLP Interchange Format (NIF) is an RDF/OWL-based format that aims to achieve interoperability between Natural Language Processing (NLP) tools, language resources and annotations. NIF consists of specifications, ontologies and software (overview), which are combined under the version identifier "NIF 2.0". Links:
http://nlp2rdf.org
http://persistence.uni-leipzig.org/nlp2rdf/
This document provides an overview of Drupal 8, including improvements for end users, site builders, designers, developers, and the timeline for its release. Key points include new mobile-first responsive features, improved authoring tools, stronger multilingual support, use of Symfony components, and a planned release date of November 19, 2015. It encourages contributors to help with documentation, examples, testing and porting existing modules to Drupal 8.
Using Cascalog to build an app with City of Palo Alto Open Data – OSCON Byrum
"Using Cascalog to build an app with City of Palo Alto Open Data" by Paco Nathan, presented at OSCON 2013 in Portland. Based on a case study from "Enterprise Data Workflows with Cascading" http://shop.oreilly.com/product/0636920028536.do
OSCON 2013: Using Cascalog to build an app with City of Palo Alto Open Data – Paco Nathan
OSCON 2013 talk in Portland about the https://github.com/Cascading/CoPA project for CMU, building a recommender system based on Open Data from the City of Palo Alto. The talk examines a "lengthy" (400+ lines) Cascalog app -- big for Cascalog -- as well as issues involved in commercial use cases for Open Data.
This document provides an overview of Omeka, an open-source digital collection management system. It discusses what Omeka is, its advantages for cultural heritage institutions, how to set up an Omeka site, important considerations for digital collections like copyright and metadata, and examples of plugins that can extend its functionality. The goal is to help attendees understand how Omeka can be useful for their institution to publish and exhibit digital collections online.
This is the presentation I would have loved to see when I started using Composer with Drupal. Based on my experience working with Composer and Drupal 7 + Drupal 8.
Learn the basics of working with Composer, the dependency management tool for PHP. Discover the commands, the files (composer.lock and composer.json), and the pros but also the cons of using the tool.
This was presented in October 2016 in Cebu for Cebu Drupal Meetups, and Drupalcamp Japan 2017 in Tokyo in January 2017.
1) Sebastian Hellmann presented on Linked Open Data and Natural Language Processing.
2) He discussed DBpedia and using it for NLP tasks like named entity recognition.
3) Hellmann proposed ways for the ULI and AKSW to collaborate on projects like adding CLDR data to the Linguistic Linked Open Data cloud and creating open benchmarks.
UnifiedViews is a joint project currently maintained by Semantic Web Company (SWC) and Semantica.cz. It was mainly developed by Charles University in Prague as a student project called ODCleanStore (version 2). It builds on the experience SWC gained with the LOD Management Suite (LODMS) used in WP7, and with ODCleanStore (version 1), developed by Charles University in Prague for the WP9a use case of the LOD2 FP7 project. In the next release of the LOD2 stack, UnifiedViews will replace LODMS as the stack's ETL tool, and it has already been adopted in other projects.
In the webinar we will give a brief overview of the UnifiedViews project (Helmut Nagy). The main part will be a presentation of the tool and its capabilities (Tomas Knap).
Ruby On Rails - Rochester K Linux User Group – Jose de Leon
The document provides a history and overview of Ruby and Ruby on Rails. It discusses how Ruby on Rails embodies best practices in software design such as the model-view-controller pattern, test-driven development, and principles of simplicity. Examples are given of how Rails enforces separation of concerns and automates common development tasks to improve productivity.
Doing Drupal: Quick Start Deployments via Distributions – Thom Bunting
With its extensive range of contributed modules, Drupal is a highly adaptable content management system. From huge mass-media publishing gateways such as economist.com and open data repositories such as data.gov.uk to a broad range of university websites and countless blog, community-building, and social networking projects, Drupal has proven itself capable of supporting diverse business and user requirements.
Recently, some useful Drupal distributions have pre-packaged leading-edge modules to facilitate the creation of highly advanced, customisable websites. These distributions harness the power of Drupal's extensible modular framework combined with the ease of its famous '5-minute installation'.
In this computer-lab-based session, participants review and explore newly released Drupal distributions, with focus on a distribution providing automated content and data aggregation, tagging, mapping, and trend visualisation. Learning objectives include: understanding how Drupal distributions can simplify CMS set-up and deployment; appraising use cases; evaluating institutional benefits and challenges.
This document discusses integrating custom taskflows into Oracle WebCenter Portal. It outlines the overall development process, including developing taskflows in JDeveloper using web services or data sources, packaging them as a shared library WAR file, deploying the WAR to WebLogic Server, registering the library with the portal, adding the taskflows to the portal's resource catalog, and using them on portal pages. It also covers configuring datasources and web services in WebLogic Admin Console and Enterprise Manager to match the taskflow configuration.
http://2016.foss4g.org/talks.html#146
Docker is a growing open-source platform for building and shipping applications as cloud services in so-called containers. But containers can be more than that! Following the idea of DevOps, Dockerfiles are a complete scripted definition of an application with all its dependencies, which can be built and published as ready-to-use images. As each container runs only "one thing" (e.g. one application, one database, a worker instance), multiple containers can be configured with the help of docker-compose.
More and more geospatial open source projects or third parties provide Dockerfiles. In this talk, we try to give an overview of the existing Docker images and docker-compose configurations for FOSS4G projects. We report on test runs that we conducted with them, informing about the evaluation results, target purposes, licenses, commonly used base images, and more. We will also give a short introduction into Docker and present the purposes that Docker images can be used for, such as easy evaluation for new users, education, testing, or common development environments.
This talk integrates and summarizes information from previous talks at FOSS4G and FOSSGIS conferences, so I'd like to thank Sophia Parafina, Jonathan Meyer, and Björn Schilberg for their contributions.
Getting Started with Drupal - Handouts – Rachel Vacek
This document provides an overview of popular contributed and core modules for the content management system Drupal. It is organized into sections on administration, content management, performance, navigation, publishing, user management, SEO/analytics, events/calendars, authentication, and library-specific modules. Key modules highlighted include Views, CCK, Context, Panels, Webform, Taxonomy Menu, Pathauto, Organic Groups, Google Analytics, Date, Calendar, LDAP, and library-focused modules like LT4L, Question/Answer for email reference, and Fedora REST API. Resources for learning more about Drupal like books, online tutorials, communities and publications are also listed.
Doctrine is an object relational mapper (ORM) built for PHP. It allows developers to work with PHP objects instead of directly with SQL. Some key features include the Doctrine Query Language (DQL) for writing object-oriented queries, support for relationships and associations between objects, and behaviors that add common functionality like timestamps and slugs. Doctrine aims to make working with databases and objects easier and more productive for PHP developers.
Composer is the de facto PHP dependency management tool of the future. An ever-increasing number of useful open-source libraries are available for easy use via Packagist, the standard repository manager for Composer. As more and more Drupal contrib modules begin to depend on external libraries from Packagist, the motivation to use Composer to manage dependencies grows stronger; since Drupal 8 core and Drush 7 now also use Composer to manage dependencies, the best way to ensure that all of the requirements are resolved correctly is to manage everything from a top-level project composer.json file.
This deck examines the different ways that Composer can be used to manage your project code, and how these new practices will influence how you use Drush and deploy code.
Watch the session video: https://www.youtube.com/watch?v=WNS3d_wzZ2Y
This presentation discusses catalog enrichment using Linked Open Data. It begins with defining catalog enrichment as any addendum to catalog records, such as links to full texts or subjects. It then discusses techniques for enrichment including matching catalog records to external data sources and linking the records. The presentation demonstrates an implementation of catalog enrichment by linking records to data sets like DBpedia, Project Gutenberg and Open Library. It concludes that while catalog enrichment is possible without Linked Open Data, using LOD makes the process easier.
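As an illustration of the matching step, a lookup against DBpedia could be sketched like this; the title and the property choices are illustrative assumptions, not the presenter's exact implementation.

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
# Sketch: match a catalog title against DBpedia (https://dbpedia.org/sparql)
# and harvest candidate enrichment data; "Moby-Dick" is an example title.
SELECT ?work ?author ?abstract WHERE {
  ?work rdfs:label "Moby-Dick"@en ;
        dbo:author ?author ;
        dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}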
Exploring and mapping the category system of the world's largest public press archives – Joachim Neubert
ZBW, a member of the Leibniz Association, has explored and mapped the category system of the world's largest public press archives. The archives contain over 25,000 thematic dossiers from 1908 to 1949 about persons, general subjects, events, and companies, with over 2 million scanned pages available online. ZBW aims to map the historic system used to organize knowledge about the world in the press archives to Wikidata, to make the information more accessible.
Donating data to Wikidata: First experiences from the "20th Century Press Archives" – Joachim Neubert
This document summarizes ZBW's experiences donating data from its 20th Century Press Archives (PM20) collection to Wikidata. PM20 contains over 2 million digitized newspaper clippings organized into 25,000 thematic dossiers on general topics, persons, companies, and products from 1908-1949. ZBW aims to link all PM20 dossier folders to Wikidata items to provide open access to these historical sources. So far, ZBW has added over 5,000 links and 6,000 statements from PM20 dossiers to Wikidata items on persons. Their next challenge is mapping PM20's hierarchical organization of dossiers on countries and topics to Wikidata.
Tutorial at DCMI conference in Seoul, 2019-09-25, by Tom Baker, Joachim Neubert and Andra Waagmeester
Rendered HTML version: https://jneubert.github.io/wd-dcmi2019/#/
Wikidata as opportunity for special collections: the 20th Century Press Archives – Joachim Neubert
This document discusses transferring metadata from the 20th Century Press Archives to Wikidata. It begins by describing the press archives collection. It then explains why Wikidata is a good platform, being sustainable, editable, and with linked open data capabilities. The document outlines the process of linking the archive's metadata to existing Wikidata items, creating new items, and adding metadata to items. It provides an example of using the linked data to create a map of economists in the collection. Future plans include linking more archive folders to items and creating pages for each folder on the archive's website.
ZBW is a member of the Leibniz Association and maintains the 20th Century Press Archives. The archives contain over 1 million digitized newspaper clippings from 1500+ newspapers covering persons, companies, products, and events. To ensure long term sustainability, ZBW is making the folder metadata from the archives openly available on Wikidata to allow for improved discovery, access, and maintenance of the metadata. Over 90% of person folders from the archives have already been linked on Wikidata. This integration with Wikidata will provide new interfaces and APIs for working with the press archive metadata.
1) The 20th Century Press Archives housed at ZBW, a member of the Leibniz Association, contains digitized press clippings dating back to 1826 that were organized into dossiers on persons, companies, products and events.
2) The metadata for the dossiers and 1.8 million digitized pages are being linked to Wikidata to make them more accessible and connect them to related information on Wikidata.
3) A new Wikidata property was created to link the press archive dossier IDs to relevant Wikidata items and the press archive data was released on Wikidata and via APIs to encourage participation in the Coding da Vinci cultural hackathon.
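For illustration, the linked dossiers can be retrieved from Wikidata with a query along these lines; P4293 is assumed here to be the PM20 folder ID property referred to above.

# Sketch: list Wikidata items that carry a press archive (PM20) folder ID
# (assuming P4293 is the property created for the archive).
SELECT ?item ?itemLabel ?folderId WHERE {
  ?item wdt:P4293 ?folderId .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,de". }
}
LIMIT 100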
Linking Knowledge Organization Systems via Wikidata (DCMI conference 2018) – Joachim Neubert
Wikidata has been used successfully as a linking hub for authority files. Knowledge organization systems like thesauri or classifications are more complex and pose additional challenges.
Making Wikidata fit as a Linking Hub for Knowledge Organization Systems – Joachim Neubert
Wikidata can serve as a linking hub for knowledge organization systems through its new "mapping relation type" qualifier (P4390), which allows external-ID claims to be optionally qualified with a mapping type such as exactMatch, closeMatch, or narrowMatch. The qualifier also makes it possible to track its usage by relation type and by external-ID property, improving Wikidata's role as a hub for linking different knowledge organization systems.
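A usage-tracking query of the kind hinted at could be sketched as follows; it retrieves every statement qualified with P4390 together with the qualifying mapping type, using the pq: and wikibase: prefixes predefined at the Wikidata query service.

# Sketch: track usage of the "mapping relation type" qualifier (P4390)
# across statements on the Wikidata query service.
SELECT ?item ?property ?extId ?mappingType WHERE {
  ?statement pq:P4390 ?mappingType .          # statements with the qualifier
  ?item ?claim ?statement .                   # item holding the statement
  ?property wikibase:claim ?claim ;           # resolve the property entity
            wikibase:statementProperty ?statementProp .
  ?statement ?statementProp ?extId .          # the statement's main value
}
LIMIT 100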
Joachim Neubert presented methods for linking authorities like economists from the Research Papers in Economics (RePEc) database and the Integrated Authority File (GND) to Wikidata items. This included developing tools to match Wikidata items to external IDs, identify items missing properties, lookup external IDs, and trigger property inserts into Wikidata via QuickStatements. Over 10% of economists were added to Wikidata by synthesizing existing mappings and inserting new items from RePEc.
Wikidata as a linking hub for knowledge organization systems? Integrating an ... – Joachim Neubert
Wikidata was created to support all of the roughly 300 Wikipedia projects. Besides interlinking all Wikipedia pages about a specific item (e.g., a person) in different languages, it also connects to more than 1900 different sources of authority information.
We will present lessons learned from using Wikidata as a linking hub for two personal name authorities in economics (GND and RePEc author identifiers) and demonstrate the benefits of moving a mapping from a closed environment to Wikidata as a public and community-curated linking hub. We will further ask to what extent these experiences can be transferred to knowledge organization systems, and how the limitation to simple 1:1 relationships (as for authorities) can be overcome. Using the STW Thesaurus for Economics as an example, we will investigate how we can make use of existing cross-concordances to "seed" Wikidata with external identifiers, and how transitive mappings to yet-unmapped vocabularies can be derived.
ELAG 2017 Abstract: Authority files and identifiers are used by libraries to consistently refer to entities such as subjects and authors. This principle of referencing “things, not strings” is also applied successfully in Linked Open Data and knowledge bases.
The open knowledge base Wikidata connects Wikipedia articles in any language and provides background information on an article's subject, such as images and affiliations. In addition to internal references, Wikidata contains identifiers from more than a thousand authority files: well-known library identifiers such as VIAF and GND, researcher IDs such as ORCID and Google Scholar, and identifiers for many more items such as TED speakers and Find A Grave. Wikidata thus emerges as a giant curated linking hub based on authority identifiers. In contrast to closed authority files that often impose tedious procedures, Wikidata can be enhanced and corrected by anybody, just like Wikipedia.
The management of links between authority files in Wikidata is particularly suitable for automation: mappings extracted from Wikidata can be used in other systems, and mapping data already available in libraries is ready to be added to Wikidata. This presentation will illustrate the use of Wikidata as an authority linking hub with a case study considering links between author identifiers from RePEc (Research Papers in Economics) and GND. Workflows include both automatic and semi-automatic mapping approaches. We will address both technical solutions and the organizational policies ruling the operation of Wikidata bots for automated updates. The tools presented in this talk include the "wdmapper" command line application for extracting and complementing mappings from Wikidata with external mapping files, and the "Mix'n'Match" tool to support crowdsourced mapping of authority files to Wikidata.
Online version: https://hackmd.io/s/S1YmXWC0e#
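The core of such a mapping extraction is a simple join on two external-ID properties. A minimal sketch of roughly what a tool like wdmapper asks the Wikidata query service (P227 = GND ID and P2428 = RePEc Short-ID are real Wikidata properties):

# Sketch: derive a GND-to-RePEc author mapping via Wikidata
# (P227 = GND ID, P2428 = RePEc Short-ID).
SELECT ?economist ?gnd ?repec WHERE {
  ?economist wdt:P227 ?gnd ;
             wdt:P2428 ?repec .
}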
Leveraging SKOS to trace the overhaul of the STW Thesaurus for Economics – Joachim Neubert
ZBW maintains the STW Thesaurus for Economics and has been overhauling it since 2010 using SKOS, releasing a new version roughly yearly. The latest version, 9.0, added 777 new concepts and deprecated 1,052 concepts. To track these complex changes, ZBW developed a dataset versioning and skos-history approach: it extracts insertions and deletions between versions and makes them queryable. This provides comprehensive information on concept changes to support users and applications that rely on the thesaurus.
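Once a version is loaded, deprecated descriptors can be inspected directly. A minimal sketch, assuming the published STW RDF flags withdrawn descriptors with owl:deprecated (an assumption about the data layout):

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
# Sketch: list deprecated descriptors in a loaded STW version,
# assuming they are flagged with owl:deprecated true.
SELECT ?concept ?label WHERE {
  ?concept owl:deprecated true ;
           skos:prefLabel ?label .
  FILTER (lang(?label) = "en")
}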
Constantly Under Construction: STW Thesaurus for Economics Linked Data Maint... – Joachim Neubert
Talk at Dublin Core Libraries Community (Sep 5, 2013 in Lisbon). Includes a first draft of a version history for SKOS files - see project on https://github.com/jneubert/skos-history
Threats to mobile devices are increasingly prevalent and growing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
Communications Mining Series - Zero to Hero - Session 1 – DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
HCL Notes and Domino license cost reduction in the world of DLAU – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to solve common configuration problems that can lead to more users being counted than necessary, and how to identify and remove superfluous or unused accounts in order to save money. There are also some practices that can lead to unnecessary expenses, e.g. using a person document instead of a mail-in database for shared mailboxes. We will show you such cases and their solutions. And of course we will explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics are covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement immediately
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... – Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Essentials of Automations: The Art of Triggers and Actions in FME – Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
UiPath Test Automation using UiPath Test Suite series, part 5 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Driving Business Innovation: Latest Generative AI Advancements & Success Story – Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack – shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Programming Foundation Models with DSPy - Meetup Slides – Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Linked Data Publishing with Drupal (SWIB12 Lightning Talk)
1. ZBW Labs - Linked Data Publishing with Drupal
Joachim Neubert
ZBW German National Library of Economics
Leibniz Information Centre for Economics
SWIB12 Lightning Talk
Cologne, 28/11/2012
ZBW is a member of the Leibniz Association
2. Example of a ZBW Labs project page
http://zbw.eu/beta/labs
3. Declaration of RDF types and attributes in Drupal
(mixing attributes from different ontologies freely)
4. RDF triples extracted from the example project page
</labs/en/node/9>
    a schema:CreativeWork, doap:Project ;
    doap:category </labs/en/taxonomy/term/1>, </labs/en/taxonomy/term/…> ;
    doap:created "2012-02"^^xsd:gYearMonth, "2012-02-01T00:00:00+01:00"^^xsd:dateTime ;
    doap:developer </labs/en/user/19> ;
    doap:homepage <http://drupal.org/sandbox/jneubert/1447918> ;
    doap:name "Economics Taxonomies in Drupal"@en ;
    doap:repository [
        a doap:GitRepository ;
        doap:location <http://git.drupal.org/sandbox/jneubert/1447918>
    ] ;
    doap:shortdesc "Making Web Services for Economics available as …"@en ;
    schema:author </labs/en/user/19> ;
    schema:name "Economics Taxonomies in Drupal"@en ;
    ...
    dc:creator </labs/en/user/19> ;
    dc:title "Economics Taxonomies in Drupal"@en .