The document discusses using linked data in a large higher-education organisation. It describes building a linked data platform for the Open University containing course, publication, media, and other university data. Several applications were developed using this linked data, including a study tool, research evaluation support, and community/media analytics. Key lessons learned include the potential for simple yet useful applications, rapid development, and the challenges of dealing with incomplete or heterogeneous data without application-specific assumptions. Overall, the experiences highlight both the opportunities and the common pitfalls of interacting with linked data at scale in a large organisation.
Putting Linked Data to Use in a Large Higher-Education Organisation
1. Putting Linked Data to Use in a Large Higher-Education Organisation
Mathieu d’Aquin
Knowledge Media Institute (KMi), The Open University, UK
@mdaquin
2. Motivation
• Many works focus on the publication of linked data
• But what do we do once it is published?
• We have built a full linked data platform for our university (the Open University, data.open.ac.uk)
• And built a lot of applications to demonstrate what we could do with it
• What do we learn from getting people to, unknowingly, use linked data?
• What experience can we reuse for the development of interactive tools relying on linked data?
4. Linked Data at the Open University
• Course information:
  – 580 modules / description of the course, information about the levels and number of credits associated with it, topics, and conditions of enrolment.
• Research publications:
  – 16,000 academic articles / information about authors, dates, abstract and venue of the publication.
• Podcasts:
  – 2,220 video podcasts and 1,500 audio podcasts / short description, topics, link to a representative image and to a transcript if available, information about the course the podcast might relate to and license information regarding the content of the podcast.
• Open Educational Resources:
  – 640 OpenLearn units / short description, topics, tags used to annotate the resource, its language, the course it might relate to, and the license that applies to the content.
• YouTube videos:
  – 900 videos / short description of the video, tags that were used to annotate the video, the collection it might be part of and a link to the related course if relevant.
• University buildings:
  – 100 buildings / address, a picture of the building and the sub-division of the building into floors and spaces.
• Library catalogue:
  – 12,000 books / topics, authors, publisher and ISBN, as well as the related course.
• Others…
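All of the above is exposed through the platform's SPARQL endpoint, which is what the applications query directly. As a minimal sketch of that interaction, the snippet below runs a generic type-discovery query; the endpoint URL is an assumption (the slides only name data.open.ac.uk), and the query deliberately avoids committing to any specific ontology.

```python
# A minimal sketch, assuming a standard SPARQL 1.1 endpoint at
# http://data.open.ac.uk/sparql (the slides only name data.open.ac.uk).
# The query is generic type discovery, so it does not depend on any specific vocabulary.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://data.open.ac.uk/sparql"  # assumed endpoint location

sparql = SPARQLWrapper(ENDPOINT)
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    SELECT ?type (COUNT(?s) AS ?n)
    WHERE { ?s a ?type }
    GROUP BY ?type
    ORDER BY DESC(?n)
    LIMIT 20
""")

# Print the most populated classes (courses, podcasts, buildings, ...) in the store.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["n"]["value"], row["type"]["value"])
```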
5. Applications
• Mobile and Personal
• Semantics
• Social
• Resource Discovery
• Research
• Exploration
7. Application 1: What we learned
• From the users’ perspective
  – Useful functionality can be very simple
  – Combining information from different sources
  – Transparent/seamless
• From the developers’ perspective
  – Development time: from months to minutes
  – Interacting directly with the data, rather than multiple different systems
  – Lack of awareness of Semantic Web technologies
  – Correspondence with other, more common technologies (e.g., SQL and relational DBs) misleading
  – Performance: a large number of SPARQL queries is not easy to handle. Requires caching of pre-canned queries (a sketch of this pattern follows below). Contradicts the idea of open and unexpected reuse
“Looked at it in rage for hours… just didn’t think it wouldn’t give me an error if I misspelled the name of a property”
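The caching point can be made concrete with a small sketch: pre-canned (fixed, application-defined) SPARQL queries whose results are kept in a short-lived in-memory cache. The endpoint URL and the one-hour TTL are illustrative assumptions, not what the OU applications actually used.

```python
# A minimal sketch of the "pre-canned query + cache" pattern, assuming a SPARQL
# endpoint at ENDPOINT. The in-memory cache and the 1-hour TTL are illustrative choices.
import time
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://data.open.ac.uk/sparql"  # assumed
CACHE_TTL = 3600                            # seconds
_cache = {}                                 # query string -> (timestamp, bindings)

def run_canned_query(query: str):
    """Run a fixed, application-defined SPARQL query, reusing a cached result if still fresh."""
    now = time.time()
    if query in _cache and now - _cache[query][0] < CACHE_TTL:
        return _cache[query][1]
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(query)
    bindings = sparql.query().convert()["results"]["bindings"]
    _cache[query] = (now, bindings)
    return bindings
```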
9. Application 2: What we learned
• From the users’ perspective
  – No additional or duplicated input required from users: reusing what was collected in multiple systems
  – Again transparent/seamless technology
  – Still some confusion related to consistency across systems/representations
  – Assumptions hard to conform with when data is drawn from multiple systems with “unwritten conventions”
• From the developers’ perspective
  – Again, rapid development
  – Extensibility and flexibility
  – SPARQL Query / SPARQL Update duo very powerful for lightweight interfaces, even client side (see the sketch below)
  – Dealing with incomplete data is tricky (we don’t know when it is incomplete)
  – No “meta-properties” of the data (e.g., whether all IDs are unique and non-redundant)
  – Assumptions made are specific to the application, not generic
  – Where is the problem? In the application, the linked data, the original data?
“Really? This uses linked data? I thought we bought it from some company…” — “Can you add a new field?”
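To illustrate the Query/Update duo, here is a minimal sketch of a lightweight read/write interface: a SELECT to show existing values and an INSERT DATA to record a user's input. The endpoint URLs, the example property, and the assumption of write access are placeholders for illustration; they are not the actual Application 2 setup.

```python
# A minimal sketch of a SPARQL Query / SPARQL Update pairing for a lightweight interface.
# Endpoints and the example property are hypothetical placeholders; a real deployment
# would need write access and its own vocabulary.
from SPARQLWrapper import SPARQLWrapper, JSON, POST

QUERY_ENDPOINT = "http://example.org/sparql"          # assumed read endpoint
UPDATE_ENDPOINT = "http://example.org/sparql-update"  # assumed write endpoint
NOTE_PROPERTY = "http://example.org/vocab#note"       # hypothetical property

def read_notes(resource_uri: str):
    """Read all notes attached to a resource."""
    sparql = SPARQLWrapper(QUERY_ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"SELECT ?note WHERE {{ <{resource_uri}> <{NOTE_PROPERTY}> ?note }}")
    return [b["note"]["value"] for b in sparql.query().convert()["results"]["bindings"]]

def add_note(resource_uri: str, text: str):
    """Attach a new note to a resource (quote escaping in `text` is omitted in this sketch)."""
    sparql = SPARQLWrapper(UPDATE_ENDPOINT)
    sparql.setMethod(POST)
    sparql.setQuery(f'INSERT DATA {{ <{resource_uri}> <{NOTE_PROPERTY}> "{text}" }}')
    sparql.query()
```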
11. Application 3: What we learned
• From the users’ perspective
  – Generic: more knowledge = more functionalities
  – Generic: homogeneous interface to heterogeneous data
  – Generic: more demanding for users
  – Application-driven vs data-driven navigation
  – Specific interface allows for more complexity
• From the developers’ perspective
  – Generic is harder: can’t make assumptions related to the specific data/application
  – Specific is less customisable/extensible: adding new features requires custom code
“Shouldn’t that be here in that case?”
12. Application 4: The OU in the media
• Academics in “Arts and Humanities” most often involved with the media (in number of news items)
• Topics most commonly mentioned by news outlets owned by the BBC (in number of news items)
13. Application 4: What we learned
• From the users’ perspective
  – Easily understandable outputs: embeddable charts
  – Customisable: build a dynamic dashboard in minutes
  – Benefits of linked data: bring in external data that can be jointly queried with your own (see the federated query sketch below)
• From the developers’ perspective
  – Requires a good understanding of the data and the technology (especially SPARQL)
  – Generic components to build specific interfaces (best of both worlds?)
  – But again cannot rely on application/data-specific assumptions (meta-properties regarding redundancy, completeness, etc.)
“I would like this chart for my blog…” — “What do you mean by ‘give me 3 minutes’?”
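The “jointly queried with your own data” point can be sketched with a federated SPARQL query: local resources are matched by label against DBpedia to pull in abstracts. The local endpoint URL, the choice of DBpedia, and the assumption that the local store supports SERVICE federation are all illustrative.

```python
# A minimal sketch of joining local data with an external source via SPARQL federation.
# Assumes the local endpoint supports the SERVICE keyword; in practice ?local would be
# restricted to a specific class (and label language tags aligned) to keep the join small.
from SPARQLWrapper import SPARQLWrapper, JSON

LOCAL_ENDPOINT = "http://data.open.ac.uk/sparql"  # assumed local endpoint

sparql = SPARQLWrapper(LOCAL_ENDPOINT)
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?local ?label ?abstract WHERE {
      ?local rdfs:label ?label .                      # resource held locally
      SERVICE <https://dbpedia.org/sparql> {          # enrich it from DBpedia
        ?ext rdfs:label ?label ;
             <http://dbpedia.org/ontology/abstract> ?abstract .
        FILTER (lang(?abstract) = "en")
      }
    } LIMIT 10
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["label"]["value"], "->", row["abstract"]["value"][:80])
```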
14. Discussion
• Linked data should be hidden from the users
  – Obvious? Yes… but is it really happening?
  – Requires some aspects of the data to be present, e.g. human-readable labels
  – Many applications of linked data are still “linked data applications”
  – Higher-level concepts, such as data integration from multiple sources, are harder to hide
• Generic vs specific
  – Reuse of software components is good
  – But it forces the adoption of a specific form of interaction, which is driven by the technicalities and the data
  – Trade-off to be found: generic + customisable
• Openness and flexibility
  – … are not always easy to deal with
  – Building interfaces for the unknown
  – No assumption can be made on the data regarding redundancy and completeness
  – Need for meta-properties that can guide the building of applications (to see what is applicable)
15. Conclusion
• Building applications in a large organisation that is used to more common technologies raises challenges that help in understanding the common pitfalls of interacting with linked data
• Important to share experiences in addition to techniques/tools
• To build better systems and approaches for interaction
Thank you! Any questions?
16. Images (others are mine)
• Broadcast: http://commons.wikimedia.org/wiki/File:Ibaraki_Broadcast_System_headquater01.jpg
• Don’t know: http://commons.wikimedia.org/wiki/File:I_Don%27t_Know_ANY_of_This!.jpg
• Development: http://commons.wikimedia.org/wiki/File:Applications-development.svg
• Learning: http://www.flickr.com/photos/vivacomopuder/3122401239/
• Course / degree: http://commons.wikimedia.org/wiki/File:Degree.svg
• Article: http://commons.wikimedia.org/wiki/File:Articles.JPG
• Open Learning: http://commons.wikimedia.org/wiki/File:Colearn_-_learning_together.jpg
• YouTube: http://commons.wikimedia.org/wiki/File:Logo_YouTube_por_Hernando.svg
• Open University building: http://www.flickr.com/photos/rattyfied/3011643690/
• Library: http://commons.wikimedia.org/wiki/File:SteacieLibrary.jpg