This document discusses defining common digital object properties for Planets. It considers what kind of properties could be defined (e.g. type properties, rendition properties), the requirements and challenges of defining objective, measurable properties that fully describe digital objects, and different approaches that could be taken to define properties (e.g. using a working group, monitoring property models that emerge from PLATO templates). In the end, it recommends clearly defining objectives and allocating effort before attempting to define common digital object properties in Planets.
2. Where are we now, technically?
• Current digital object property architecture:
– PLATO captures institutional requirements.
– Can link them directly to ‘technical’ properties, e.g. those
extracted via the XCL tools.
• Planets has no other ‘Digital Object Properties’
– Conceptual models and definitions are considered
institutionally dependent.
– Note that O.T. templates can allow common property
models and definitions to emerge.
• Should we try to define them? How?
3. Original Testbed Property Model
• Evaluate migration services based on comparing
‘Intellectual Properties’ of the input and output.
– Defined as Properties of Types of Intellectual Object.
– Extract relevant Intellectual Properties from initial and
final Digital Objects.
– Intellectual Properties judged same/similar/different.
– Evaluate service/tool based on how well the properties
were preserved.
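The evaluation loop above (extract intellectual properties from the initial and final objects, judge each pair as same/similar/different, then rate the service) can be sketched in a few lines. This is a hypothetical illustration only: the property names, values, and the tolerance-based judgement rule are invented for the example and are not real XCL output or Testbed logic.

```python
def judge(before, after, tolerance=0.01):
    """Classify one property pair as 'same', 'similar', or 'different'.
    The relative-tolerance rule is an assumption for illustration."""
    if before == after:
        return "same"
    if isinstance(before, (int, float)) and isinstance(after, (int, float)):
        if before and abs(before - after) / abs(before) <= tolerance:
            return "similar"
    return "different"

def evaluate_migration(props_in, props_out):
    """Compare the two extracted property sets, per property."""
    return {name: judge(props_in[name], props_out.get(name))
            for name in props_in}

# Illustrative property sets for a hypothetical TIFF -> PNG migration.
props_in  = {"page_count": 12, "image_width": 2480, "bits_per_sample": 8}
props_out = {"page_count": 12, "image_width": 2479, "bits_per_sample": 8}

print(evaluate_migration(props_in, props_out))
# -> {'page_count': 'same', 'image_width': 'similar', 'bits_per_sample': 'same'}
```

How well the service "preserved" the object then reduces to how many properties, weighted by institutional significance, come back "same" or "similar".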
4. Digital Object ‘Types’
• In the Testbed (TB), an interpretation of the rendered object.
– i.e. a scanned TIFF of a journal article could be interpreted
as both an Image and a Document, employing properties
from both.
• For comparison, in XCL, ‘type’ would refer to the
information model which encompasses the
information content of some set of formats.
– e.g. the image model which covers PNG, TIFF, etc.
5. Significance & Objectivity
• General realisation that ‘significance’ is at least very
difficult to define.
• Significance of properties differs between Planets
Partners.
• Can we define ‘objective’ properties?
– Requires a shared system of property definitions.
– Attach ‘significance’ to these properties in PLATO.
6. Requirements For Intellectual Properties
• Requirements
– Must pertain to the rendition, i.e. transcend file format.
– Must be institutionally independent, i.e. ‘objective’.
– Must be self-consistent, implying one system of
intellectual properties per rendition ‘type’.
– Must be concrete, i.e. measurable in principle.
– Must be capable of completely describing the rendered
form (required in order to be ‘objective’, to be capable of
verifying ‘authenticity’ for all definitions of ‘significant’).
• This has proven very difficult!
7. Problems with that model
• Not clear if some of those requirements are even
possible to meet in principle.
• Clearly a very large amount of work.
– Even if achieved, does not include enough technical
properties that are of interest.
• e.g. format properties with non-trivial consequences: does
the service preserve the compression method?
• e.g. properties of services (speed etc), agents, technical
environments, etc.
• So, where should we focus our efforts?
8. What kind of properties?
• If we do want higher-level properties…
– ‘Type’ Properties?
– Properties of the Rendition?
• What degree of coverage, of what?
– Aim for completeness, but narrowly focussed?
– Sparse, but concentrate on risks and flaws?
• Explicitly define comparative properties?
– We don’t just want to compare properties extracted from files.
– e.g. define a property that means the RMS difference
between two images when rendered (c.f. XCL).
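A comparative property such as the RMS difference is defined over a *pair* of renditions, so it cannot be extracted from either file alone. A minimal sketch, assuming both renditions have been rasterised to equal-length greyscale pixel sequences (the pixel values below are invented for illustration):

```python
import math

def rms_difference(pixels_a, pixels_b):
    """Root-mean-square difference between two rendered images,
    given as equal-length sequences of greyscale pixel values.
    Defined over the pair of renditions, not a single file (c.f. XCL)."""
    if len(pixels_a) != len(pixels_b):
        raise ValueError("renditions must have the same dimensions")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pixels_a, pixels_b))
                     / len(pixels_a))

# Two tiny renditions flattened to pixel lists (illustrative values).
original = [10, 20, 30, 40]
migrated = [10, 22, 30, 38]

print(rms_difference(original, migrated))  # 0.0 would mean pixel-identical renderings
```

A value of 0.0 asserts the renderings are identical; a threshold on this value is one concrete way to operationalise "similar" for image-typed objects.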
9. Ideas for how to do it.
• Use DOPWG(?) to pool knowledge only?
– Park it until Planets 2.
• Use DOPWG as seed for long-term working group?
• Use DOPWG to monitor PLATO templates for
emergence of common property models?
– Look for commonality and encourage convergence?
• Use DOPWG to generate a property model?
– Produce a document, an XML Schema, an ontology, a
dictionary, or a PLATO O.T. template?
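Whatever form the model takes (schema, ontology, dictionary), each entry needs roughly the same minimum fields. A hypothetical sketch of a single dictionary entry, as plain data: every field name and the identifier scheme below are invented for illustration; Planets defined no such schema.

```python
# Hypothetical entry in a shared property dictionary. Field names and the
# identifier URI scheme are assumptions, not any real Planets model.
image_width = {
    "id": "planets:property/image/width",  # assumed stable-identifier scheme
    "label": "Image width",
    "rendition_type": "image",             # ties the property to a rendition 'type'
    "datatype": "integer",
    "unit": "pixel",
    "comparative": False,                  # measured on one instance, not a pair
    "definition": "Width of the rendered image in pixels.",
}

def validate(entry):
    """Check an entry carries the minimum fields a PLATO template (or an
    ontology/XML Schema rendering of the model) would need."""
    required = {"id", "label", "rendition_type", "datatype", "definition"}
    return required.issubset(entry)

print(validate(image_width))  # -> True
```

The same record serialises naturally to any of the candidate forms, so the choice between them is mostly about tooling, not content.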
10. Summary
• Should we attempt to define common ‘Digital Object
Properties’ in Planets (1)?
– What kind of properties?
• ‘Object’ or ‘Rendering/Performance/BetterWord’?
• ‘Comparative’ as well as ‘of-an-instance’?
– How to proceed?
• Observe, steer, or control?
• Define inside or outside PLATO? Augment ontology?
• Needs well-defined objectives and allocated effort!