Presentation of "Reusing Linguistic Resources: Tasks and Goals for a Linked Data Approach", March 9, DGfS 34, Frankfurt, Germany.
Find the paper at: http://www.springerlink.com/content/k535323272457913
This presentation is an updated version of my Data Management 101 talk, which covers the basics of research data management in the categories of: storage and backup, documentation, organization, and making files usable for the future.
Slides from NCURA's webinar "Part I: Public Access: Practical Ways To Assist Faculty To Comply With Public Access Policies". This is the last section of the webinar, on open data.
Lab Notebooks as Data Management (SLA Winter Virtual Conference 2012) - Kristin Briney
This talk, aimed at librarians, describes the data management issues surrounding paper and electronic lab notebooks. It offers several ways for librarians to support good practices and the transition from paper to electronic.
Learn about library guides and open source software to create them, presented by Katie Lynn in March 2010 for Get On The Bus Wyoming: http://getonthebuswyoming.wordpress.com/.
Opening What's Closed: Using Open Source Tools to Tear Down [Vendor] Silos - Ken Varnum
The University of Michigan Library's web site is a consistent, integrated front end to what was a collection of 19 distinctly different library sites and multiple library silos. The library's site now combines a variety of tools (including Drupal, VuFind, Springshare's LibGuides, Ex Libris's Metalib, DSpace, and Solr) within a single interface. In this talk, you will learn about the design process that informed the system architecture and the way we are using data from both open source and proprietary software to break down information silos. Presented at WiLSWorld 2010 in Madison, Wisconsin.
Troubleshooting Electronic Resources with ILL Data - NASIG
Troubleshooting electronic resource linking issues can seem an insurmountable task - so many resources, so little time. Using ILL data on requests for materials available online, the electronic resources staff at the Samford University Library detected problems with the implementation of their new link resolver. This data also provided a window into some systemic issues within the metadata of certain sources and the link resolver knowledgebase. In addition to helping us improve linking for our users, the establishment of a workflow for communicating cancelled ILL transaction data on an ongoing basis has also improved the communication between electronic resources staff and the ILL department regarding the overall linking process.
Speaker: Beth Ashmore, Metadata Librarian for Serials and Electronic Resources, Samford University
Automatic Heritage Metadata Enrichment with Historic Events - Marieke van Erp
Here are the slides from the Agora presentation at Museums and the Web 2011. Johan Oomen and Marieke van Erp presented this first version of the Agora event extraction system, which enriches museum collections with historical events, during the Linked Data Session on Thursday 7 April 2011.
For more information see http://agora.cs.vu.nl
Agora: putting museum objects into their art-historic context - Marieke van Erp
The digital era has presented big challenges, but also great opportunities for the museum world. One of these opportunities is the way museums can open up their collections to the public. Many museums are now actively exploring possibilities to present their collections online for visitors who cannot come to the museum, or to show objects for which they do not have space in the exhibition halls. Often they will put together themed Web sites for online exhibitions in which objects are presented in a certain context. However, these themed Web sites usually cover only a small part of their collection. For the majority of the objects, the context is not made explicit. In the Agora project, we aim to make this context explicit in an automatic way in order to help users understand and interpret museum objects. We do this by linking museum objects to historical events and explicitly presenting these links in an event-driven browsing environment.
In the first part of my talk, I will explain the theoretical framework we have developed in the Agora project to represent historical contexts as well as the general challenges to the project. In the second part of my talk, I will focus on the particular challenges in information extraction for building the event thesaurus and linking museum objects.
These slides are from a presentation given at the EURECOM seminar on 20 July 2012.
Slides of the Knowledge and Media lecture about Linked Data and Linked Open Data, presented 19 November 2012. Slides were based on presentations by Victor de Boer and Christophe Guéret.
Slides shown at the Agora Bronbeek session in which we interviewed eyewitnesses and interested lay persons on how they share their memories and access information about historical events they were involved in.
Lecture 5: Mining, Analysis and Visualisation - Marieke van Erp
This is the fourth lecture in the Social Web course at the VU University Amsterdam
Visit the website for more information: Social Web 2012
About the Webinar
The library and cultural institution communities have generally accepted the vision of moving to a Linked Data environment that will align and integrate their resources with those of the greater Semantic Web. But moving from vision to implementation is not easy or well-understood. A number of institutions have begun the needed infrastructure and tools development with pilot projects to provide structured data in support of discovery and navigation services for their collections and resources.
Join NISO for this webinar where speakers will highlight actual Linked Data projects within their institutions—from envisioning the model to implementation and lessons learned—and present their thoughts on how linked data benefits research, scholarly communications, and publishing.
Speakers:
Jon Voss - Strategic Partnerships Director, We Are What We Do
LODLAM + Historypin: A Collaborative Global Community
Matt Miller - Front End Developer, NYPL Labs at the New York Public Library
The Linked Jazz Project: Revealing the Relationships of the Jazz Community
Cory Lampert - Head, Digital Collections , UNLV University Libraries
Silvia Southwick - Digital Collections Metadata Librarian, UNLV University Libraries
Linked Data Demystified: The UNLV Linked Data Project
Talk about Exploring the Semantic Web, and particularly Linked Data, and the Rhizomer approach. Presented August 14th 2012 at the SRI AIC Seminar Series, Menlo Park, CA
Talk given at Open Knowledge Foundation 'Opening Up Metadata: Challenges, Standards and Tools' Workshop, Queen Mary University of London, 13th June 2012.
Info on the event at http://openglam.org/2012/05/31/last-places-left-for-opening-up-metadata-challenges-standards-and-tools/
UCISA Learning Analytics Pre-Conference Workshop - Mike Moore
Mike Moore - Sr. Advisory Consultant - Analytics
Desire2Learn, Inc.
UCISA Conference 2014, Brighton, UK
Presented Mar 26, 2014
Slides from our tutorial on Linked Data generation in the energy domain, presented at the Sustainable Places 2014 conference on October 2nd in Nice, France
Lucene/Solr Revolution 2015: Where Search Meets Machine Learning - Joaquin Delgado, PhD
Search engines have focused on solving the document retrieval problem, so their scoring functions do not naturally handle non-traditional IR data types, such as numerical or categorical ones. Therefore, in domains beyond traditional search, scores representing strengths of associations or matches may vary widely. Since the original model doesn't suffice, relevance ranking is performed as a two-phase approach: 1) a regular search, followed by 2) an external model that re-ranks the filtered items. Metrics such as click-through and conversion rates are associated with the users' response to items served. The selection rates predicted in real time can be critical for optimal matching. For example, in recommender systems, the predicted performance of a recommended item in a given context, also called response prediction, is often used in determining the set of recommendations to serve for a given serving opportunity. Similar techniques are used in the advertising domain. To address this issue, the authors have created ML-Scoring, an open source framework that tightly integrates machine learning models into a popular search engine (Solr/Elasticsearch), replacing the default IR-based ranking function. A custom model is trained through either Weka or Spark and loaded as a plugin used at query time to compute custom scores.
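The two-phase approach described above can be sketched in a few lines. This is a hypothetical pure-Python illustration (the functions, feature names, and weights are invented for this sketch, not part of ML-Scoring): phase 1 retrieves candidates with their IR scores, and phase 2 re-scores them with a learned model that folds in behavioural signals such as click-through rate.

```python
import math

def ir_search(query, index):
    """Phase 1: stand-in for a regular Solr/Elasticsearch query.
    Returns candidate docs with their IR relevance scores."""
    return [doc for doc in index if query in doc["text"]]

def model_score(doc, weights):
    """Phase 2: a learned scoring function (here a toy logistic model)
    combining the IR score with a behavioural feature (CTR)."""
    z = (weights["ir"] * doc["ir_score"]
         + weights["ctr"] * doc["ctr"]
         + weights["bias"])
    return 1.0 / (1.0 + math.exp(-z))  # predicted selection rate

def rerank(candidates, weights):
    """Re-order the phase-1 candidates by the model's score."""
    return sorted(candidates, key=lambda d: model_score(d, weights),
                  reverse=True)

index = [
    {"id": "a", "text": "jazz history", "ir_score": 2.1, "ctr": 0.02},
    {"id": "b", "text": "jazz records", "ir_score": 1.4, "ctr": 0.30},
]
weights = {"ir": 0.5, "ctr": 8.0, "bias": -1.0}
ranked = rerank(ir_search("jazz", index), weights)
print([d["id"] for d in ranked])  # → ['b', 'a']
```

Note how the high-CTR document outranks the one with the better IR score: the external model, not the engine's default ranking function, has the final say.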
Towards Culturally Aware AI Systems - TSDH Symposium - Marieke van Erp
Towards Culturally Aware AI Systems
Presented 23 June 2021
Slide credits: Cultural AI team members Andrei Nesterov, Laura Hollink, Ryan Brate, Valentin Vogelmann + input and inspiration from all Cultural AI Colleagues
Biases in data can be both explicit and implicit. Explicitly, ‘The Dutch Seventeenth Century’ and ‘The Dutch Golden Age’ are pseudo-synonymous and refer to a particular era of Dutch history. Implicitly, the ‘Golden Age’ moniker is contested because the geopolitical and economic expansion came at great cost, such as the slave trade. A simple two-word phrase can carry strong contestations, and entire research fields, such as post-colonial studies, are devoted to them. However, these sometimes subtle (and sometimes not so subtle) differences in voice are as yet rarely well represented in AI systems.
In this talk, I will discuss how the Cultural AI Lab is working towards creating AI systems that are implicitly or explicitly aware of the subtle and subjective complexity of human culture. I will highlight the different research strands and activities that look at AI from different angles as well as how we engage with our user communities to create synergies between the technology and the daily practice of cultural heritage professionals.
The Human in Digital Humanities
Online Symposium, Tilburg School of Humanities & Digital Sciences
Tilburg University
https://www.digitalhumanitiestilburg.com/
Marieke van Erp & Victor de Boer (2021, June). A Polyvocal and Contextualised Semantic Web. In European Semantic Web Conference (pp. 506-512). Springer, Cham.
Presented on 8 June, 2021
Computationally Tracing Concepts Through Time and Space - Marieke van Erp
Slides for HNR2020 Keynote presentation
Abstract:
Digitised sources are a treasure trove for scholars, but accessing the information contained in them is far from trivial. Due to scale, traditional methods are insufficient to analyse the big data coming from these sources. Hence, computational methods look to be the solution. Indeed, computational methods can be utilised to identify and model concepts in large digital datasets, however the nature of these datasets as well as that of humanities research questions requires caution. In particular, the ramifications of time and location on understanding concepts cannot be underestimated.
In this talk, Marieke will present ongoing work on computationally tracing concepts through time and across geography using language and semantic web technology. The work illustrates that seemingly simple concepts (e.g. sugar) prove to be much more complex than expected. We discuss the importance of semantics in helping not only to deal with this complexity but also to reify it so that it can be interrogated both computationally and via expert analysis.
Slides 5, 8, 11, 12, 15, 16, 17, 18, 19, 20 are based on the presentation Tabea Tietz gave for the paper "Challenges of Knowledge Graph Evolution from an NLP Perspective" in the WHiSe Workshop @ ESWC 2020 (2 June 2020).
http://hnr2020.historicalnetworkresearch.org/
The Hitchhiker's Guide to the Future of Digital Humanities - Marieke van Erp
Slides of my DHOxSS closing lecture
Oxford, 26 July 2019
Abstract
In the constellation of research fields, new configurations are continuously reshaping our ideas of what a field should be. This is particularly the case in the young field of digital humanities which, as David M. Berry noted, started with a focus on improving access to digital repositories and then moved to expanding the limits of archives to include born-digital materials as research objects. Both moves greatly impacted our research practice. However, I argue that we have only started scratching the surface of what digital methods can mean for humanities research.
In particular, as our methods and collaborations with other fields have matured, we can now start imagining new types of research questions that go beyond the sum of their ‘digital’ and ‘humanities’ parts -- to fundamentally change the nature of the humanities questions that we can ask. For such a reshaping to occur, we need to deepen the connection to our academic neighbours and keep looking beyond our own research community in order to ask these new questions. In my talk, I will present how multi-disciplinary collaborations between historians, linguists, and computer scientists can bring about new insights that may form the first steps to this future.
Why language technology can’t handle Game of Thrones (yet) - Marieke van Erp
Natural language processing (NLP) tools are commonly used in many day-to-day applications such as Siri and Google, but the effectiveness of these technologies is not thoroughly understood. I will present joint work with colleagues from the Vrije Universiteit Amsterdam in which we perform a thorough evaluation of four different named entity recognition tools on 40 popular novels (including A Game of Thrones). I will highlight why literary texts are so difficult for NLP tools as well as solutions for improving their performance.
Finding common ground between text, maps, and tables for quantitative and qua... - Marieke van Erp
Invited talk given at 8th AIUCD Conference 2019 – ‘Pedagogy, teaching, and research in the age of Digital Humanities’
http://aiucd2019.uniud.it/
24 January 2019, Udine, Italy
Slicing and Dicing a Newspaper Corpus for Historical Ecology Research - Marieke van Erp
Presented at EKAW 2018
Historical newspapers are a novel source of information for historical ecologists to study the interactions between humans and animals through time and space. Newspaper archives are particularly interesting to analyse because of their breadth and depth. However, the size and the occasional noisiness of such archives also brings difficulties, as manual analysis is impossible. In this paper, we present experiments and results on automatic query expansion and categorisation for the perception of animal species between 1800 and 1940. For query expansion and for the manual annotation process, we used lexicons. For the categorisation, we trained a Support Vector Machine model. Our results indicate that we can distinguish newspaper articles that are about animal species from those that are not with an F1 of 0.92, and achieve up to 0.84 F1 on the subcategorisation of the different types of newspaper articles on animals.
Lessons Learnt from the Named Entity rEcognition and Linking (NEEL) Challenge... - Marieke van Erp
Giuseppe Rizzo, Bianca Pereira, Andrea Varga, Marieke van Erp, Amparo Elizabeth Cano Basave
Presented on Wednesday 10 October at the 17th International Semantic Web Conference (ISWC 2018)
Paper: http://www.semantic-web-journal.net/content/lessons-learnt-named-entity-recognition-and-linking-neel-challenge-series
Conference: http://iswc2018.semanticweb.org/
Entity Typing Using Distributional Semantics and DBpedia - Marieke van Erp
Presentation given at NLP&DBpedia workshop on 18 October 2016. The presentation accompanies the work described in: https://nlpdbpedia2016.files.wordpress.com/2016/09/nlpdbpedia2016_paper_9.pdf
The domain as unifier, how focusing on social history can bring technical fie... - Marieke van Erp
Invited talk given at the final CEDAR symposium about the interaction between (social) history, language technology, and semantic web.
https://socialhistory.org/en/events/final-cedar-mini-symposium
Evaluating entity linking: an analysis of current benchmark datasets and a ro... - Marieke van Erp
Marieke van Erp, Pablo Mendes, Heiko Paulheim, Filip Ilievski, Julien Plu, Giuseppe Rizzo and Joerg Waitelonis
Presented at LREC 2016:
http://www.lrec-conf.org/proceedings/lrec2016/pdf/926_Paper.pdf
Finding Stories in 1,784,532 Events: Scaling up computational models of narr... - Marieke van Erp
Slides of the NewsReader Computational Models of Narrative Presentation "Finding Stories in 1,784,532 Events: Scaling Up Computational Models of Narrative - Marieke van Erp, Antske Fokkens, and Piek Vossen"
Workshop page: http://narrative.csail.mit.edu/cmn14/
Project page: http://www.newsreader-project.eu
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Key Trends Shaping the Future of Infrastructure - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud, and open source; it explores how these areas are likely to mature and develop over the short and long term, and considers how organisations can position themselves to adapt and thrive.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already got working for real.
2. Introduction
• BA, MA & PhD compling/information extraction @Tilburg University
• Since 2009: SemWeb group @VU University Amsterdam
3. Why Reuse Linguistic Resources?
• Linguistic resources are expensive to create
• ...and difficult to use for ‘outsiders’
• How can we reach out to the ‘outside world’?
Image Source: http://cyberbrethren.com/wp-content/uploads/2012/02/language1.jp
4. Make reuse easier!
• Increased visibility
• Social value:
  • stimulates collaboration
  • accelerates innovation
• External quality control
Image Source: http://th02.deviantart.net/fs71/PRE/i/2010/146/b/3/DON__T_PANIC_by_VigilantMeadow.jpg
5. What’s holding us back?
• Fear?
• Habit?
Image Source: http://mindfulbalance.files.wordpress.com/2011/02/hesitate1.jpg
6. Practical Constraints
1. Task specificity
2. Formats
3. Different conceptual models
4. No machine-readable definitions
5. Lack of metadata
Image Source: http://bogdankipko.com/wp-content/uploads/2011/12/barriers.jpg
7. 1. Task-specificity
• Resources are often geared towards one specific task, e.g., part-of-speech tagging, named entity recognition
• How can we make our resources more flexible?
Image Source: http://thelearnersguild.files.wordpress.com/2008/07/the-informal-learners-toolkit1.jpg
8. 2. Formats
• XML, inline XML, CSV, one word per line, one sentence per line, slashtags, ARFF, ...
Image Source: http://www.elec-intro.com/EX/05-13-03/kf_compact_data.jpg
9. 3. Conceptual Models
• An NP is an NP is an NP?
• “President Obama signed the National Defense Authorization Act after months of debate”
• NE: “President Obama”?
• NE: “Obama”?
Image Source: http://www.w3.org/2001/sw/BestPractices/WNET/wordnet-sw-20040713-fig01.png
10. 4. Lack of Machine-Readable Definitions
• For integration or reuse, manual effort is needed:
  • time consuming
  • difficult to track definitions
  • not scalable
Image Source: http://www.barcode1.co.uk/images/samplejplarge.jpg
11. 5. Lack of Metadata
• Can I trust this data provider?
• How was this data created?
  • How many annotators?
  • for the entire data set?
  • per instance?
• If generated automatically, what were the parameters?
Image Source: http://darwin-online.org.uk/converted/published/1859_Origin_F373/1859_Origin_F373_fig02.jpg
12. A Linked Data Approach
• Linked Data is not a magic solution to all problems
• ...but it is better than what we’ve got at this moment
Image Source: http://linkeddata.org/static/images/lod-datasets_2009-07-14_cropped.png
13. 1. Using RDF
• RDF is not inherently better than some other formats, but it is used by many
• + SPARQL makes it easy to retrieve data
Image Source: http://www.247ha.com/images/rdf.jpg
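The slide's point, that RDF triples plus SPARQL make retrieval easy, can be illustrated with a minimal sketch. The triple set and the pattern matcher below are a pure-Python stand-in for an RDF store (in practice you would use a library such as rdflib or a SPARQL endpoint), and all the `ex:` URIs are invented examples:

```python
# Toy RDF-style store: each fact is a (subject, predicate, object) triple.
triples = {
    ("ex:Obama", "rdf:type", "ex:Person"),
    ("ex:Obama", "ex:signed", "ex:NDAA"),
    ("ex:NDAA", "rdf:type", "ex:Act"),
}

def match(pattern, store):
    """Match a single SPARQL-like triple pattern; '?x' marks a variable.
    ("?s", "ex:signed", "ex:NDAA") corresponds to the SPARQL query
    SELECT ?s WHERE { ?s ex:signed ex:NDAA }"""
    results = []
    for triple in store:
        binding = {}
        for part, term in zip(pattern, triple):
            if part.startswith("?"):
                binding[part] = term   # bind the variable
            elif part != term:
                break                  # constant mismatch: skip triple
        else:
            results.append(binding)
    return results

print(match(("?s", "ex:signed", "ex:NDAA"), triples))
# → [{'?s': 'ex:Obama'}]
```

Because every resource is just a triple, one generic query mechanism works across all datasets, which is exactly the retrieval convenience SPARQL provides over real RDF graphs.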
14. 2. Mapping Annotations
• A single conceptual model for all linguistic resources is not going to happen
• ...but can we spot the similarities between models and utilise that?
Image Source: http://www.webology.org/2006/v3n3/images/sample.JPG
15. 3. Grounding
• It’s only linked data if you link it to other sources
• Added bonus: automatic sense disambiguation + access to a wealth of extra knowledge about your data item
Image Source: http://mj-services.com/wallpaper/More_WallPaper/Trees/Giants,%20Calaveras%20State%20Park%20-%201600x1200%20-%20ID%2015.jpg
16. 4. Define Your Metadata
• Include your data model
• Preferably give each instance’s provenance:
  • collection
  • annotation/creation
  • previous versions
  • confidence
Image Source: http://www.wineaustralia.com/australia/Portals/2/November%20E-news/Wines%20of%20Provenance%20Final.jpg
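The per-instance provenance the slide asks for can be sketched as a simple record. The field names and values here are invented for illustration (in a real dataset you might express the same information with the W3C PROV vocabulary); the point is that a consumer can then filter on metadata instead of guessing:

```python
# A hypothetical annotated instance carrying its own provenance.
annotation = {
    "value": "PERSON",                    # the annotation itself
    "surface": "President Obama",
    "provenance": {
        "collection": "newswire-2011",    # where the text came from
        "created_by": ["annotator_1", "annotator_2"],  # annotation/creation
        "previous_versions": ["v0.9"],
        "confidence": 0.87,               # agreement or model confidence
    },
}

def is_trustworthy(record, min_confidence=0.8, min_annotators=2):
    """Answer 'can I trust this data?' from the metadata alone."""
    prov = record["provenance"]
    return (prov["confidence"] >= min_confidence
            and len(prov["created_by"]) >= min_annotators)

print(is_trustworthy(annotation))  # → True
```

With provenance attached per instance, the questions from slide 11 (how many annotators? what parameters?) become queryable properties of the data rather than guesswork.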
17. Conclusions
• Look for similarities between resources
• Say where your resource comes from
• Use standards, or make it easy for others to convert your data to a standard
• Link to other data
Image Source: http://efr0702.files.wordpress.com/2012/02/puzzle.jpg