The Personal Networks of Novice Librarian Researchers (IRDL)
This presentation reports the findings of an analysis of personal network data gathered from the novice librarian researchers who participated in the summer workshop of the Institute for Research Design in Librarianship (IRDL), an institute designed to provide instruction in how to conduct a research project and to establish a peer network of like-minded library professionals who support each other throughout the research process. The first wave of data was gathered before the participants began IRDL, the second at the completion of the workshop, the third six months after the workshop, and a fourth will be gathered at the one-year mark. The data describe the people in each IRDL participant's personal research network and the strength of those relationships. During the presentation we will report our observations of the research networks over time.
Highlighted in the presentation is EgoWeb 2.0, the freely available, open-source, web-based software used to gather the personal network data. We will describe the process of customizing the survey software to ask for the names of the people these novice researchers turn to to give or get research-related advice or help, how often they interact (whether related to research or not), the modes in which those interactions take place, and whether the people in the network know each other. We will report the statistical results the software computes, such as density and closeness, and show the customizable visualization it provides of each personal network.
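EgoWeb reports density among these statistics. As a rough illustration of what that number means, here is a minimal sketch of personal (ego) network density, the share of possible ties among a respondent's alters that actually exist; the alter names and ties below are invented for illustration, not IRDL data.

```python
# Ego network density: observed alter-alter ties / possible alter-alter ties.
from itertools import combinations

def ego_density(alters, ties):
    """alters: list of names; ties: set of frozenset pairs that know each other."""
    possible = len(list(combinations(alters, 2)))
    if possible == 0:
        return 0.0
    observed = sum(1 for pair in combinations(alters, 2)
                   if frozenset(pair) in ties)
    return observed / possible

alters = ["Ana", "Ben", "Caro", "Dev"]
ties = {frozenset(p) for p in [("Ana", "Ben"), ("Ben", "Caro")]}
print(ego_density(alters, ties))  # 2 of 6 possible ties
```

A dense network (value near 1) means most of the respondent's research contacts also know one another; a sparse one suggests the respondent bridges otherwise separate circles.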
Who models the world? Collaborative ontology creation and user roles in Wikidata (Alessandro Piscopo)
Wikidata is a collaborative knowledge graph created in 2012, with over 100,000 users and 40 million entities. While it does not have a formally defined ontology, items can represent entities or classes, and classes are related through taxonomic properties such as P279 (subclass of) or P31 (instance of).
The researchers analyzed Wikidata's ontology quality over time using seven metrics and identified two main user roles, leaders and contributors, based on editing patterns. Leaders made more edits overall and to the ontology, while contributors edited less and made more use of batch tools.
Regression analysis found that while higher leader activity did not significantly reduce empty classes, it was positively correlated with increased inheritance richness and
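Two of the ontology metrics mentioned above can be sketched on a toy snapshot of Wikidata-style statements. The metric definitions used here follow common OntoQA-style usage (inheritance richness as average direct subclasses per class; empty classes as classes with neither instances nor subclasses) and the items are invented, so this is an illustration rather than the authors' exact formulas.

```python
# Toy P279 (subclass of) and P31 (instance of) statements.
subclass_of = {           # item -> its P279 target
    "Q_cat": "Q_animal",
    "Q_dog": "Q_animal",
    "Q_animal": "Q_entity",
}
instance_of = {           # item -> its P31 target
    "Q_felix": "Q_cat",
}

classes = set(subclass_of) | set(subclass_of.values())

# Inheritance richness: average number of direct subclasses per class.
subclass_counts = {c: 0 for c in classes}
for child, parent in subclass_of.items():
    subclass_counts[parent] += 1
inheritance_richness = sum(subclass_counts.values()) / len(classes)

# Empty classes: no instances and no subclasses.
empty = sorted(c for c in classes
               if c not in set(instance_of.values()) and subclass_counts[c] == 0)

print(inheritance_richness, empty)  # 0.75 ['Q_dog']
```

On a real Wikidata dump the same counting is done over millions of statements, tracked over time to see how each user role's activity moves the metrics.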
What Wikidata teaches us about knowledge engineering (Elena Simperl)
This document provides an overview of a presentation about Wikidata and what it teaches about knowledge engineering. Some key points:
- Wikidata is a collaborative knowledge graph created by Wikimedia in 2012 with over 23,000 active users and billions of edits.
- It contains statements about items linked by properties, with items identified by Q codes and properties by P codes. Items can represent classes, entities, or values.
- Statements include optional qualifiers and required references, which can link internally or externally.
- The knowledge graph is co-edited by both humans and over 340 bots created by the community.
- Research on Wikidata explores how group composition, diversity, and provenance impact knowledge
Publishing in a High Quality Journal.pptx (Ibrahim573144)
The document provides biographical information about two speakers for an upcoming seminar on publishing in high-quality journals:
1) Alvin K. Mulashani, who has degrees in oil and natural gas engineering from XSYU and CUG and works in the School of Earth Resources at Wuhan University.
2) Ibrahim AL-Wesabi, who has degrees in artificial intelligence from SU and CUG and is pursuing a PhD in artificial intelligence and optimization algorithms for renewable energy resources at Wuhan University.
The seminar will be held on September 22nd at the Silk Road Institute campus and will discuss topics such as introducing artificial intelligence and bioinspired algorithms, using AI in renewable energy, publishing background,
1) The document describes the SOPHIA project, which aims to build altmetric networks of researchers and institutions to understand how research impacts spread in society.
2) SOPHIA collects data from Scopus and social media sources to build a heterogeneous graph network, and analyzes the network using graph metrics to measure the influence and authority of researchers and institutions.
3) The project has developed visualization and search tools to explore the altmetric networks, annotated documents, and metrics within a software prototype called SOPHIA.
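The influence measures SOPHIA computes over its heterogeneous graph can be approximated, very roughly, by standard centrality scores. The sketch below builds a tiny graph of researchers, papers, tweets, and institutions and computes normalised degree centrality; the node names and the choice of metric are assumptions for illustration, not SOPHIA's actual implementation.

```python
# Toy heterogeneous altmetric graph as an undirected edge list.
from collections import Counter

edges = [
    ("researcher:alice", "paper:p1"),
    ("researcher:bob", "paper:p1"),
    ("tweet:t1", "paper:p1"),
    ("researcher:alice", "inst:uniX"),
]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Normalised degree centrality: degree / (n - 1).
n = len(degree)
centrality = {node: d / (n - 1) for node, d in degree.items()}
print(max(centrality, key=centrality.get))  # paper:p1 is the hub here
```

Richer measures (PageRank-style authority, betweenness) follow the same pattern: build the graph once, then score nodes to rank researchers and institutions by network position.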
What Wikidata teaches us about knowledge engineering (Elena Simperl)
This document summarizes an expert talk about Wikidata and knowledge engineering. The key points are:
1) Wikidata is a collaborative knowledge graph with over 28,000 active users that contains over 97 million items and 1.6 billion edits. It allows both human and bot editors.
2) Studies of Wikidata show that a balanced mix of bot and human editors, as well as diversity in editor tenure and interests, leads to higher quality knowledge graph items and ontology.
3) Provenance or references are important for trust in Wikidata statements, but the quality of these references is not well understood and varies across languages. Further research is exploring how to better evaluate reference quality.
This presentation was provided by Kristi Holmes of Northwestern University during the NISO hot topic virtual conference "Effective Data Management," which was held on September 29, 2021.
The Innovation Engine for Team Building – The EU Aristotele Approach From Ope... (ARISTOTELE)
The ARISTOTELE approach was presented at the Innovation Adoption Forum for Industry and Public Sector within the 6th IEEE International Conference on Digital Ecosystem Technologies (IEEE DEST - CEE 2012). The presentation was given by Paolo Ceravolo and Ernesto Damiani (University of Milan) during the keynote "The Innovation Engine for Team Building – The EU Aristotele Approach". Learn more at http://www.aristotele-ip.eu/
This document outlines a workshop on using blogs to share knowledge and engage communities online. It discusses how blogs fit alongside other organizational activities and shares the presenter's experience with an institutional blog. The document also contrasts individual and multi-author blogs. Additionally, it provides examples of tools that can be used to develop an online community and engagement through social media and blogging. The workshop aims to help participants reflect on using blogs and online engagement strategies.
Nominet trust projects theory of change presentation 2016 (Daniel Robinson)
This document provides an overview of theory of change approaches for evaluating social impact. It defines theory of change as describing how specific changes are expected to occur as a result of interventions and actions. The document discusses best practices for developing theories of change such as having plausible, doable, and testable causal links between activities and outcomes. It also addresses challenges such as the complexity of social systems and limitations of research available. Throughout, it provides exercises and examples to illustrate key concepts for developing and critiquing theories of change.
My Empirikom 2012 presentation in Aachen, Germany. I discuss my work with analytical constructs (genre ecologies, activity systems, activity networks), illustrating them with a case and showing how they might point to better understandings of computer-mediated communication in professional environments.
Peer Review of Workflow ModelsIn the Week 4 Discussion, you ex.docx (templestewart19)
Peer Review of Workflow Models
In the Week 4 Discussion, you explored the benefits of gaining an outsider’s perspective on a workflow issue or gap you are investigating. It can be equally beneficial to request feedback from others on the accuracy and clarity of a workflow model.
In this Discussion, you and your colleagues critique one another’s Visio drafts of your workflow models that you created for Part 1 of the Course Project and provide feedback on how to make the workflow model more complete. You also receive feedback on your own workflow model and consider additional information that you may need to collect as you conduct your gap analysis.
To prepare
By Day 1 of this week, your Instructor will have assigned you two colleagues' Visio drafts to review. Locate these drafts in Doc Sharing. (Please see the attached files for assigned colleague Visio drafts.)
Examine each workflow model using the basic requirements outlined in the Course Project. Consider the following:
Does each draft use standard Visio workflow shapes for start and end points, basic steps, and decision points?
Are all points connected with arrows flowing in the correct direction?
Are swimlanes present to identify who completes each task?
Carefully read through each workflow model.
Does it make sense?
What areas are unclear or confusing?
Are all decision points adequately explained?
What parts need additional detail?
Identify at least one additional gap in each workflow. For example, this may be a redundant task, an unnecessary task, an ineffective system or process, or an area where staff need support. What meaningful use objective or objectives are related to the identified gap?
With these thoughts in mind:
Post by tomorrow 10/04/16, a minimum of 550 words in APA with 2 references, addressing the level one headings below:
1) Your reviews of your colleagues’ Visio drafts. Identify any basic requirements (standard workflow shapes, arrow directions, decision points, swimlanes, etc.) that are unmet or need revision. Also identify areas that lack information and what additional detail is necessary to clarify those areas.
PLEASE SEE THE ATTACHED FILES FOR THE VISIO DRAFT TO REVIEW
2) Describe the gap you identified in the workflow and explain how it is related to at least one meaningful use objective.
Required Readings
Dennis, A., Wixom, B. H., & Roth, R. M. (2015). Systems analysis and design (6th ed.). Hoboken, NJ: Wiley.
Review Chapter 5, “Process Modeling” (pp. 153–186)
Helmers, S. (2011). Microsoft Visio 2010 step by step. Sebastopol, CA: O’Reilly.
Chapter 7, “Adding and Using Hyperlinks” (pp. 215–238)
This chapter includes instructions for adding hyperlinks to a Visio drawing. These can be links to a website, another document, or another Visio page.
Chapter 8, “Sharing and Publishing Diagrams: Part 1” (pp. 239–270)
This chapter introduces how to preview and print a Visio diagram, how to create Visio templates, and how to post a diagram to the.
Aniket Kittur, Bongwon Suh, Bryan Pendleton, Ed H. Chi. He Says, She Says: Conflict and Coordination in Wikipedia. In Proc. of ACM Conference on Human Factors in Computing Systems (CHI 2007), pp. 453–462, April 2007. ACM Press. San Jose, CA.
http://www-users.cs.umn.edu/~echi/papers/2007-CHI/2007-Wikipedia-coordination-PARC-CHI2007.pdf
Action Learning Sets: An Innovative Way to Facilitate Writing for Publication (Self Employed)
Presentation given by Maria J Grant, Research Fellow, University of Salford, UK at the 7th International Evidence Based Library and Information Practice (EBLIP7) conference, University of Saskatchewan, Saskatoon, Canada, 15th-18th July 2013.
www.eblip7.library.usask.ca
Operationalisation of Collaboration Sunbelt 2015 (Dawn Foster)
The operationalisation of collaboration: in search of a definition and its consequences on analysis
Collaboration has been defined in numerous ways. Researchers interested in collaboration at the individual or organizational level need to pay special attention to the adoption of a specific definition, as this is likely to have major implications for the research design and outcomes. With respect to collaboration within open source software projects, this presentation has two objectives. Firstly, it will investigate a wide variety of definitions of collaboration from the existing literature. Secondly, it will look at the theoretically informed selection of a definition. Throughout the presentation, specific emphasis will be put on the implications of adopting different definitions of collaboration for the application of Social Network Analysis to the study of open source software, particularly considering data collection and analysis. Open source software is developed in the open, where anyone can view the source code and anyone with the knowledge to do so can contribute to the project. Because people from around the world work on these projects together using online tools, it is a relevant setting for studying collaboration. An interesting aspect of open source collaboration is that private resources from individuals and organizations are used to develop software that is released as a public good. Social Network Analysis can be used to understand the network relationships between the individuals who develop this software. Given the interest in collaboration from researchers from different backgrounds and disciplines, similar research is likely to produce considerations that stimulate further thoughts about definitions of collaboration in several domains and research settings.
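How the chosen definition shapes the analysis can be seen in one common operationalisation: projecting a two-mode developer-to-file network onto a developer-developer network, where two developers are linked when they edited the same file. The commit data below is invented for illustration; other definitions (e.g. mailing-list replies) would produce a different network from the same project.

```python
# Project developer -> file contributions into a developer-developer
# collaboration network weighted by shared files.
from itertools import combinations
from collections import Counter

commits = [
    ("dev_a", "core.c"),
    ("dev_b", "core.c"),
    ("dev_b", "util.c"),
    ("dev_c", "util.c"),
    ("dev_a", "docs.md"),
]

files = {}
for dev, path in commits:
    files.setdefault(path, set()).add(dev)

collab = Counter()
for devs in files.values():
    for pair in combinations(sorted(devs), 2):
        collab[pair] += 1

print(dict(collab))  # {('dev_a', 'dev_b'): 1, ('dev_b', 'dev_c'): 1}
```

Note that dev_a and dev_c are unconnected under this definition even though they work on the same project, which is exactly the kind of consequence the presentation argues researchers should weigh before committing to a definition.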
Professor Dagobert Soergel's talk (2009 CISTA Award Recipient): Task-centric ... (kristenlabonte)
"The task-centric revolution. Weaving information into workflows." Systems should be centered around tasks, not applications. This talk will present ideas and techniques towards the design of task-centric systems.
Activating Research Collaboratories with Collaboration Patterns (CommunitySense)
This presentation explains how collaborative communities require evolving socio-technical systems. Collaboration patterns are important to design these systems and capture lessons learnt. The role of librarians as collaboration pattern stewards and collaborative working system architects is outlined.
The document summarizes presentations from three perspectives on progress towards open and interoperable research data service workflows:
1) Angus Whyte of the Digital Curation Centre discussed new DCC guidance and design principles for integrating research data service workflows.
2) Rory Macneil of Research Space discussed integrating their ELN with University of Edinburgh's DataShare and Harvard's Dataverse repositories using open standards.
3) Stuart Lewis of University of Edinburgh discussed their DataVault prototype for packaging data to be archived from a Jisc Research Data Spring project. The case studies illustrate challenges and opportunities for improving integration between active data management and long-term preservation services.
This presentation was provided by Chris Erdmann of Library Carpentries and by Judy Ruttenberg of ARL during the NISO virtual conference, Open Data Projects, held on Wednesday, June 13, 2018.
Slides for:
"Software Citation in Theory and Practice," by Daniel S. Katz and Neil P. Chue Hong (published paper - https://doi.org/10.1007/978-3-319-96418-8_34; preprint - https://arxiv.org/abs/1807.08149), presented at International Congress on Mathematical Software (ICMS 2018)
Abstract. In most fields, computational models and data analysis have become a significant part of how research is performed, in addition to the more traditional theory and experiment. Mathematics is no exception to this trend. While the system of publication and credit for theory and experiment (journals and books, often monographs) has developed into an expected part of the culture of how research is shared and how candidates for hiring and promotion are evaluated, software (and data) do not have the same history. A group working as part of the FORCE11 community developed a set of principles for software citation that fit software into the journal citation system and allow software to be published and then cited, and there are now over 50,000 DOIs that have been issued for software. However, some challenges remain, including: promoting the idea of software citation to developers and users; collaborating with publishers to ensure that systems collect and retain required metadata; ensuring that the rest of the scholarly infrastructure, particularly indexing sites, includes software; working with communities so that software efforts count; and understanding how best to cite software that has not been published.
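The metadata-collection challenge mentioned above becomes concrete when a tool has to turn a software record into a citable reference. The sketch below assembles a citation string from a metadata dictionary; the field names loosely mirror CITATION.cff conventions, and the record, including the DOI, is an invented placeholder rather than a real software entry.

```python
# Hypothetical software metadata record -> citation string.
meta = {
    "authors": ["Doe, J.", "Roe, R."],
    "title": "ExampleSolver",
    "version": "1.2.0",
    "year": 2018,
    "doi": "10.5281/zenodo.0000000",  # placeholder, not a real DOI
}

def cite(m):
    authors = " and ".join(m["authors"])
    return (f'{authors} ({m["year"]}). {m["title"]} '
            f'(Version {m["version"]}) [Computer software]. '
            f'https://doi.org/{m["doi"]}')

print(cite(meta))
```

The point of the citation principles is that once the metadata exists and is retained, producing (and indexing) such references is mechanical; the hard part is getting developers, publishers, and repositories to collect the fields in the first place.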
Skills & ideas for #ProblemGamblingKTE (Anne Bergen)
Skills & ideas for #ProblemGamblingKTE. (2014). Part of the "Moving Research Forward" Workshop Series for the Ontario Problem Gambling Research Centre.
Building better knowledge graphs through social computing (Elena Simperl)
Elena Simperl discusses how social computing can help build better knowledge graphs. She presents research on how the editing behaviors and diversity of communities impact the quality of knowledge graphs like Wikidata and DBpedia. Her studies found that bot edits, tenure diversity, and interest diversity positively influence item and ontology quality. She also shows how crowdsourcing can enhance knowledge graphs by having experts and non-experts perform different quality assurance tasks, like detecting errors or classifying entities.
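One of the crowdsourcing quality-assurance steps described, having several contributors judge the same statement, typically ends with an aggregation step. Here is a minimal sketch of majority-vote aggregation over invented statements and worker labels; real pipelines often use more sophisticated models that weight workers by reliability.

```python
# Aggregate several workers' judgements per statement by majority vote.
from collections import Counter

votes = {
    "Q42-P31-Q5": ["correct", "correct", "error"],
    "Q1-P279-Q5": ["error", "error", "correct"],
}

def majority(labels):
    return Counter(labels).most_common(1)[0][0]

decisions = {stmt: majority(v) for stmt, v in votes.items()}
print(decisions)  # {'Q42-P31-Q5': 'correct', 'Q1-P279-Q5': 'error'}
```

Routing different tasks to experts versus non-experts, as the talk suggests, then becomes a matter of which statements go into which `votes` pool and how many judgements each needs.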
This document discusses the principles of user-centered design. It emphasizes the importance of understanding users, conducting research to learn about their needs and tasks, and involving users throughout the design process. Some key user research methods mentioned include wants and needs analysis, card sorting, group task analysis, and contextual interviews. The document stresses that good design starts with the user, and that consulting with and keeping users as the central focus leads to designs that best solve the problems users face.
This document discusses the principles of user-centered design. It emphasizes the importance of understanding users, conducting research to learn about their needs and tasks, and involving users throughout the design process. Some key user research methods mentioned include wants and needs analysis, card sorting, group task analysis, and contextual interviews. The document stresses that good design starts with the user, and that innovation comes from addressing the right problems for the target users.
Better software, better service, better research: The Software Sustainabilit... (Carole Goble)
Ever spotted some great-looking software only to discover you can’t get it, it doesn’t work, there is no documentation to help fix it, and the developers don’t have the time or incentive to help? Ever produced some software that you want to be widely used or have folks contribute to? What’s the sustainability of that key platform/library/tool/database your lab uses day in and day out? Are you helping the providers? The same issues stand for data (or, as we now say, “FAIR” – Findable, Accessible, Interoperable, Reusable – data) and its metadata. Is anyone looking out for Europe’s data services – the datasets and analysis systems you use and make, the standards they use, and the curators and developers who make them? Or is FAIR just a FAIRy story? I’ll tell how two organisations with quite different structures and approaches – the UK’s Software Sustainability Institute and the ELIXIR European Research Infrastructure for Life Science Data – are working for the common goal of better software, better service, and better research.
https://www.rothamsted.ac.uk/events/14th-international-symposium-integrative-bioinformatics
Methods for Intrinsic Evaluation of Links in the Web of Data (Cristina Sarasua)
The current Web of Data contains a large amount of interlinked data. However, there is still a limited understanding about the quality of the links connecting entities of different and distributed data sets. Our goal is to provide a collection of indicators that help assess existing interlinking. In this paper, we present a framework for the intrinsic evaluation of RDF links, based on core principles of Web data integration and foundations of Information Retrieval. We measure the extent to which links facilitate the discovery of an extended description of entities, and the discovery of other entities in other data sets. We also measure the use of different vocabularies. We analysed links extracted from a set of data sets from the Linked Data Crawl 2014 using these measures.
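The flavour of such intrinsic indicators can be shown on a toy triple set: the share of a data set's entities with at least one outbound link into another data set, and the number of distinct link vocabularies used. The triples, namespaces, and the specific indicator below are illustrative assumptions, not the paper's exact measures.

```python
# Toy RDF-like triples from data set ds1, with links into ds2 and ds3.
triples = [
    ("ds1:e1", "owl:sameAs", "ds2:x1"),
    ("ds1:e1", "rdfs:label", '"Entity one"'),
    ("ds1:e2", "skos:exactMatch", "ds3:y9"),
    ("ds1:e3", "rdfs:label", '"Entity three"'),
]

entities = {s for s, _, _ in triples}

def is_external(o):
    # Crude namespace test, sufficient for this toy data.
    return o.startswith(("ds2:", "ds3:"))

linked = {s for s, p, o in triples if is_external(o)}
coverage = len(linked) / len(entities)          # entities with outbound links
link_vocabs = {p for s, p, o in triples if is_external(o)}
print(coverage, sorted(link_vocabs))  # 2 of 3 entities linked, 2 vocabularies
```

High coverage with varied, well-chosen link vocabularies is one signal that links actually help consumers discover extended entity descriptions in other data sets.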
This document discusses how linking open data can make data more valuable and useful. It recommends following semantic web and linked data practices like publishing data using RDF, linking entities to related datasets, and maintaining and improving links over time. Linking data allows queries across datasets, facilitates data integration, and enables new applications by connecting related information. The key is to link data in a way that answers questions and benefits both data publishers and users, and to iteratively enhance link quality and coverage.
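Why links enable queries across data sets can be shown with a tiny join through an owl:sameAs link between two toy triple sets; the data, names, and namespaces are invented for illustration.

```python
# Two toy triple sets connected by an owl:sameAs link.
ds_a = [("a:paris", "a:population", 2100000),
        ("a:paris", "owl:sameAs", "b:FR-Paris")]
ds_b = [("b:FR-Paris", "b:country", "b:France")]

triples = ds_a + ds_b

# Follow the sameAs link, then look up the country in the other data set.
same = {s: o for s, p, o in triples if p == "owl:sameAs"}
country = {s: o for s, p, o in triples if p == "b:country"}
print(country[same["a:paris"]])  # b:France
```

Without the sameAs statement the two descriptions of Paris stay disconnected; with it, a single query can combine population from one publisher with country from another, which is the value the document attributes to well-maintained links.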
This document outlines a workshop on using blogs to share knowledge and engage communities online. It discusses how blogs fit alongside other organizational activities and shares the presenter's experience with an institutional blog. The document also contrasts individual and multi-author blogs. Additionally, it provides examples of tools that can be used to develop an online community and engagement through social media and blogging. The workshop aims to help participants reflect on using blogs and online engagement strategies.
Nominet trust projects theory of change presentation 2016Daniel Robinson
This document provides an overview of theory of change approaches for evaluating social impact. It defines theory of change as describing how specific changes are expected to occur as a result of interventions and actions. The document discusses best practices for developing theories of change such as having plausible, doable, and testable causal links between activities and outcomes. It also addresses challenges such as the complexity of social systems and limitations of research available. Throughout, it provides exercises and examples to illustrate key concepts for developing and critiquing theories of change.
My Empirikom 2012 presentation in Aachen, Germany. I discuss my work with analytical constructs (genre ecologies, activity systems, activity networks), illustrating them with a case and showing how they might point to better understandings of computer-mediated communication in professional environments.
Peer Review of Workflow ModelsIn the Week 4 Discussion, you ex.docxtemplestewart19
Peer Review of Workflow Models
In the Week 4 Discussion, you explored the benefits of gaining an outsider’s perspective on a workflow issue or gap you are investigating. It can be equally beneficial to request feedback from others on the accuracy and clarity of a workflow model.
In this Discussion, you and your colleagues critique one another’s Visio drafts of your workflow models that you created for Part 1 of the Course Project and provide feedback on how to make the workflow model more complete. You also receive feedback on your own workflow model and consider additional information that you may need to collect as you conduct your gap analysis.
To prepare
By
Day 1
of this week, your Instructor will have assigned you to review two colleague's Visio drafts. Locate these drafts in
Doc Sharing
. (
Please see the attached files for assigned colleague Visio drafts)
Examine each workflow model using the basic requirements outlined in the Course Project. Consider the following:
Does each draft use standard Visio workflow shapes for start and end points, basic steps, and decision points?
Are all points connected with arrows flowing in the correct direction?
Are swimlanes present to identify who completes each task?
Carefully read through each workflow model.
Does it make sense?
What areas are unclear or confusing?
Are all decision points adequately explained?
What parts need additional detail?
Identify at least one additional gap in each workflow. For example, this may be a redundant task, an unnecessary task, an ineffective system or process, or an area where staff need support. What meaningful use objective or objectives are related to the identified gap?
With these thoughts in mind:
Post by tomorrow 10/04/16, a minimum of 550 words in APA with 2 references, addressing the level one headings below:
1)
Your reviews of your colleagues’ Visio drafts. Identify any basic requirements (standard workflow shapes, arrow directions, decision points, swimlanes, etc.) that are unmet or need revision. Also identify areas that lack information and what additional detail is necessary to clarify those areas.
PLEASE SEE THE ATTACHED FILES FOR THE VISIO DRAFT TO REVIEW
2)
Describe the gap you identified in the workflow and explain how it is related to at least one meaningful use objective.
Required Readings
Dennis, A., Wixom, B. H., & Roth, R. M. (2015).
Systems analysis and design
(6th ed.). Hoboken, NJ: Wiley.
Review Chapter 5, “Process Modeling” (pp. 153–186)
Helmers, S. (2011).
Microsoft Visio 2010 step by step
. Sebastopol, CA: O’Reilly.
Chapter 7, “Adding and Using Hyperlinks” (pp. 215–238)
This chapter includes instructions for adding hyperlinks to a Visio drawing. These can be links to a website, another document, or another Visio page.
Chapter 8, “Sharing and Publishing Diagrams: Part 1” (pp. 239–270)
This chapter introduces how to preview and print a Visio diagram, how to create Visio templates, and how to post a diagram to the.
Kittur, A., Suh, B., Pendleton, B., & Chi, E. H. (2007). He says, she says: Conflict and coordination in Wikipedia. In Proc. of the ACM Conference on Human Factors in Computing Systems (CHI 2007), pp. 453–462. San Jose, CA: ACM Press.
http://www-users.cs.umn.edu/~echi/papers/2007-CHI/2007-Wikipedia-coordination-PARC-CHI2007.pdf
Action Learning Sets: An Innovative Way to Facilitate Writing for Publication
Presentation given by Maria J. Grant, Research Fellow, University of Salford, UK at the 7th International Evidence Based Library and Information Practice (EBLIP7) conference, University of Saskatchewan, Saskatoon, Canada, 15th–18th July 2013.
www.eblip7.library.usask.ca
Operationalisation of Collaboration, Sunbelt 2015 (Dawn Foster)
The operationalisation of collaboration: in search of a definition and its consequences on analysis
Collaboration has been defined in numerous ways. Researchers interested in collaboration at the individual or organizational level need to pay special attention to the adoption of a specific definition, as this is likely to have major implications for the research design and outcomes. With respect to collaboration within open source software projects, this presentation has two objectives. Firstly, it will survey a wide variety of definitions of collaboration from the existing literature. Secondly, it will look at the theoretically informed selection of a definition. Throughout, specific emphasis will be put on the implications of adopting each definition of collaboration for the application of Social Network Analysis to the study of open source software, particularly with regard to data collection and analysis. Open source software is developed in the open, where anyone can view the source code and anyone with the knowledge to do so can contribute to the project. Because people from around the world work on these projects together using online tools, it is a relevant setting for studying collaboration. An interesting aspect of open source collaboration is that private resources from individuals and organizations are used to develop software that is released as a public good. Social Network Analysis can be used to understand the network relationships between the individuals who develop this software. Given the interest in collaboration from researchers of different backgrounds and disciplines, this research is likely to produce considerations that stimulate further thought about definitions of collaboration in several domains and research settings.
Professor Dagobert Soergel's talk (2009 CISTA Award Recipient): Task-centric ... (kristenlabonte)
"The task-centric revolution. Weaving information into workflows." Systems should be centered around tasks, not applications. This talk will present ideas and techniques towards the design of task-centric systems.
Activating Research Collaboratories with Collaboration Patterns (CommunitySense)
This presentation explains how collaborative communities require evolving socio-technical systems. Collaboration patterns are important to design these systems and capture lessons learnt. The role of librarians as collaboration pattern stewards and collaborative working system architects is outlined.
The document summarizes presentations from three perspectives on progress towards open and interoperable research data service workflows:
1) Angus Whyte of the Digital Curation Centre discussed new DCC guidance and design principles for integrating research data service workflows.
2) Rory Macneil of Research Space discussed integrating their ELN with University of Edinburgh's DataShare and Harvard's Dataverse repositories using open standards.
3) Stuart Lewis of University of Edinburgh discussed their DataVault prototype for packaging data to be archived from a Jisc Research Data Spring project. The case studies illustrate challenges and opportunities for improving integration between active data management and long-term preservation services.
This presentation was provided by Chris Erdmann of Library Carpentry and by Judy Ruttenberg of ARL during the NISO virtual conference, Open Data Projects, held on Wednesday, June 13, 2018.
Slides for:
"Software Citation in Theory and Practice," by Daniel S. Katz and Neil P. Chue Hong (published paper - https://doi.org/10.1007/978-3-319-96418-8_34; preprint - https://arxiv.org/abs/1807.08149), presented at International Congress on Mathematical Software (ICMS 2018)
Abstract. In most fields, computational models and data analysis have become a significant part of how research is performed, in addition to the more traditional theory and experiment. Mathematics is no exception to this trend. While the system of publication and credit for theory and experiment (journals and books, often monographs) has developed and has become an expected part of the culture, how research is shared and how candidates for hiring, promotion are evaluated, software (and data) do not have the same history. A group working as part of the FORCE11 community developed a set of principles for software citation that fit software into the journal citation system, allow software to be published and then cited, and there are now over 50,000 DOIs that have been issued for software. However, some challenges remain, including: promoting the idea of software citation to developers and users; collaborating with publishers to ensure that systems collect and retain required metadata; ensuring that the rest of the scholarly infrastructure, particu- larly indexing sites, include software; working with communities so that software efforts count; and understanding how best to cite software that has not been published.
Skills & Ideas for #ProblemGamblingKTE (Anne Bergen)
Skills & ideas for #ProblemGamblingKTE. (2014). Part of the “Moving Research Forward” Workshop Series for the Ontario Problem Gambling Research Centre.
Building better knowledge graphs through social computing (Elena Simperl)
Elena Simperl discusses how social computing can help build better knowledge graphs. She presents research on how the editing behaviors and diversity of communities impact the quality of knowledge graphs like Wikidata and DBpedia. Her studies found that bot edits, tenure diversity, and interest diversity positively influence item and ontology quality. She also shows how crowdsourcing can enhance knowledge graphs by having experts and non-experts perform different quality assurance tasks, like detecting errors or classifying entities.
This document discusses the principles of user-centered design. It emphasizes the importance of understanding users, conducting research to learn about their needs and tasks, and involving users throughout the design process. Some key user research methods mentioned include wants and needs analysis, card sorting, group task analysis, and contextual interviews. The document stresses that good design starts with the user, that innovation comes from addressing the right problems for the target users, and that consulting with and keeping users as the central focus leads to designs that best solve the problems users face.
Better software, better service, better research: The Software Sustainabilit... (Carole Goble)
Ever spotted some great looking software only to discover you can’t get it, it doesn’t work, there is no documentation to help fix it and the developers don’t have the time or incentive to help? Ever produced some software that you want to be widely used or have folks contribute? What’s the sustainability of that key platform/library/tool /database your lab uses day in and day out? Are you helping the providers? The same issues stand for Data (or as we now say “FAIR” Findable, Accessible, Interoperable, Reusable Data) and its metadata. Is anyone looking out for Europe’s data services– the datasets and analysis systems you use and you make – the standards they use and the curators and developers who make them? Or is FAIR just a FAIRy story? I’ll tell how two organisations with quite different structures and approaches - the UK’s Software Sustainability Institute and the ELIXIR European Research Infrastructure for Life Science Data – are working for the common goal of better software, better service, and better research.
https://www.rothamsted.ac.uk/events/14th-international-symposium-integrative-bioinformatics
Similar to Editing Behavior over Time: Power vs. Standard Wikidata Editors
Methods for Intrinsic Evaluation of Links in the Web of Data (Cristina Sarasua)
The current Web of Data contains a large amount of interlinked data. However, there is still a limited understanding about the quality of the links connecting entities of different and distributed data sets. Our goal is to provide a collection of indicators that help assess existing interlinking. In this paper, we present a framework for the intrinsic evaluation of RDF links, based on core principles of Web data integration and foundations of Information Retrieval. We measure the extent to which links facilitate the discovery of an extended description of entities, and the discovery of other entities in other data sets. We also measure the use of different vocabularies. We analysed links extracted from a set of data sets from the Linked Data Crawl 2014 using these measures.
This document discusses how linking open data can make data more valuable and useful. It recommends following semantic web and linked data practices like publishing data using RDF, linking entities to related datasets, and maintaining and improving links over time. Linking data allows queries across datasets, facilitates data integration, and enables new applications by connecting related information. The key is to link data in a way that answers questions and benefits both data publishers and users, and to iteratively enhance link quality and coverage.
Workshop "Weaving Relations of Trust in Crowd Work: Transparency and Reputation across Platforms" co-located with ACM Web Science Conference 2016.
http://trustincrowdwork.west.uni-koblenz.de/
This document contains the agenda for a workshop on building trust in crowd work. The morning sessions include an invited talk and three presentations on studies of crowd workers, focusing on task clarity, the role of empathy in interfaces, and a demographic survey. After a statement marathon, more presentations are scheduled for the afternoon on crowd work platforms, followed by a panel discussion and crowdsourced reviews to close the workshop.
The document discusses gender inequality in technology and Wikipedia. It notes that although there are more men than women in these fields, there were and are important female contributors such as Ada Lovelace, Grace Hopper, and Anita Borg. It explains initiatives to promote women's participation, such as edit-a-thons and support organizations. Finally, it highlights Wikidata's potential to manage information more consistently across languages and to enable data reuse.
This document presents an introduction to Wikidata. It summarizes that Wikidata is a project to create a free database of world knowledge through volunteer collaboration. It explains some key characteristics, such as the fact that the data is structured and multilingual, and provides statistics on its growth. It also describes basic concepts such as items, properties, and statements, as well as tools for editing and querying Wikidata.
The document discusses how interlinking on the Web of Data involves more than just owl:sameAs identity links, and should also include domain-specific relationship links. It proposes that boosting the creation of these domain-specific links could improve the current situation where identity links are more prevalent. Methods to do so include developing link discovery techniques that go beyond simple resource comparisons, drawing on vocabularies and contextual relevance. Evaluation campaigns may also need extending beyond instance matching to properly assess these relationship links.
Programmatic Access to Crowdsourced Human Computation for Designing and Enhan... (Cristina Sarasua)
The document discusses using crowdsourced human computation to improve interlinking of datasets in the Linked Open Data cloud. It presents CROWDKI, a system that manages microtasks on crowdsourcing platforms to systematically involve humans in assessing potential interlinks and validating automatically generated links. Two use cases are described: 1) having crowds assess the relevance of different interlinking possibilities between resources and 2) having crowds validate and enhance automatically computed links between datasets. The architecture utilizes a crowdsourcing platform API to generate and collect microtasks that can be completed rapidly and at low cost to enhance interlinking over purely automatic methods.
This document discusses using microtask crowdsourcing to support data interlinking in semantic libraries. It describes how microtasks can be used to generate candidate links between datasets, collect crowd worker responses on those links, and aggregate the responses to generate a final set of links. Several challenges of this approach are also identified, including analyzing crowd workers, selecting representative test links, and providing context information with the microtasks. Potential use cases discussed include mapping vocabularies, discovering instance links, curating mapping extensions, and checking links with library users.
With an increasing micro-labor supply and a larger available workforce, new microtask platforms have emerged providing an extensive list of marketplaces where microtasks are offered by requesters and completed by crowd workers. The current microtask crowdsourcing infrastructure does not offer the possibility to be recognised for already accomplished and offered work in different microtask platforms. This lack of information leads to uninformed decisions in selection processes, which have been acknowledged as a promising way to improve the quality of crowd work. To overcome this limitation, we propose Crowd Work CV, an RDF-based data model that, similarly to a traditional Curriculum Vitae, captures crowd workers’ interests, qualifications and work history, as well as requesters’ information. Crowd Work CV enables the representation of crowdsourcing agents’ identities and promotes their work experience across the different microtask marketplaces.
The document discusses using crowdsourcing to assist with data interlinking. It describes how crowdsourcing can be used to extend resource descriptions, enable richer queries, and create links between datasets. The approach involves combining algorithmic and human computation by having humans complete microtasks to validate, correct, and create new links. Several challenges are discussed, such as how to optimize the process, incentivize workers, and assess when crowdsourcing is needed.
Exploring the challenge of linking scientific publications and studies with c... (Cristina Sarasua)
1) A researcher wants to analyze voting patterns in Germany over 20 years but publications and research data are published separately without links between them.
2) The document proposes using crowd workers to link publications to research studies by having them identify references, connections, and relation types between publications and studies.
3) An initial case study showed crowd workers could correctly link publications and studies and improve automatically generated links, though tasks requiring background knowledge or deciding publication types showed worse results.
"NATO Hackathon Winner: AI-Powered Drug Search", Taras Kloba (Fwdays)
This is a session that details how PostgreSQL's features and Azure AI Services can be effectively used to significantly enhance the search functionality in any application.
In this session, we'll share insights on how we used PostgreSQL to facilitate precise searches across multiple fields in our mobile application. The techniques include using LIKE and ILIKE operators and integrating a trigram-based search to handle potential misspellings, thereby increasing the search accuracy.
We'll also discuss how the azure_ai extension on PostgreSQL databases in Azure and Azure AI Services were utilized to create vectors from user input, a feature beneficial when users wish to find specific items based on text prompts. While our application's case study involves a drug search, the techniques and principles shared in this session can be adapted to improve search functionality in a wide range of applications. Join us to learn how PostgreSQL and Azure AI can be harnessed to enhance your application's search capability.
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc... (DanBrown980551)
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
Session 2 - Introduction to UiPath Studio Fundamentals (UiPathCommunity)
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
Northern Engraving | Modern Metal Trim, Nameplates and Appliance Panels (Northern Engraving)
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
Northern Engraving | Nameplate Manufacturing Process - 2024 (Northern Engraving)
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... (Alex Pruden)
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Must-Know PostgreSQL Extensions for DBAs and Developers During Migration (Mydbops)
Mydbops Opensource Database Meetup 16
Topic: Must-Know PostgreSQL Extensions for Developers and DBAs During Migration
Speaker: Deepak Mahto, Founder of DataCloudGaze Consulting
Date & Time: 8th June | 10 AM - 1 PM IST
Venue: Bangalore International Centre, Bangalore
Abstract: Discover how PostgreSQL extensions can be your secret weapon! This talk explores how key extensions enhance database capabilities and streamline the migration process for users moving from other relational databases like Oracle.
Key Takeaways:
* Learn about crucial extensions like oracle_fdw, pgtt, and pg_audit that ease migration complexities.
* Gain valuable strategies for implementing these extensions in PostgreSQL to achieve license freedom.
* Discover how these key extensions can empower both developers and DBAs during the migration process.
* Don't miss this chance to gain practical knowledge from an industry expert and stay updated on the latest open-source database trends.
Mydbops Managed Services specializes in taking the pain out of database management while optimizing performance. Since 2015, we have been providing top-notch support and assistance for the top three open-source databases: MySQL, MongoDB, and PostgreSQL.
Our team offers a wide range of services, including assistance, support, consulting, 24/7 operations, and expertise in all relevant technologies. We help organizations improve their database's performance, scalability, efficiency, and availability.
Contact us: info@mydbops.com
Visit: https://www.mydbops.com/
Follow us on LinkedIn: https://in.linkedin.com/company/mydbops
For more details and updates, please follow up the below links.
Meetup Page : https://www.meetup.com/mydbops-databa...
Twitter: https://twitter.com/mydbopsofficial
Blogs: https://www.mydbops.com/blog/
Facebook(Meta): https://www.facebook.com/mydbops/
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way that breaks data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is repaid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
AppSec PNW: Android and iOS Application Security with MobSF (Ajin Abraham)
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
From Natural Language to Structured Solr Queries using LLMs (Sease)
This talk draws on experimentation to enable AI applications with Solr. One important use case is to use AI for better accessibility and discoverability of the data: while User eXperience techniques, lexical search improvements, and data harmonization can take organizations to a good level of accessibility, a structural (or “cognitive”) gap remains between the data user needs and the data producer constraints.
That is where AI – and most importantly, Natural Language Processing and Large Language Model techniques – could make a difference. This natural language, conversational engine could facilitate access and usage of the data leveraging the semantics of any data source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal.
The key concept is to enable users to express their search queries in natural language, which the LLM then enriches, interprets, and translates into structured queries based on the Solr index’s metadata.
This approach leverages the LLM’s ability to understand the nuances of natural language and the structure of documents within Apache Solr.
The LLM acts as an intermediary agent, offering a transparent experience to users automatically and potentially uncovering relevant documents that conventional search methods might overlook. The presentation will include the results of this experimental work, lessons learned, best practices, and the scope of future work that should improve the approach and make it production-ready.
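As a rough sketch of the final translation step described above: once an LLM has mapped a natural-language request onto index fields, assembling the structured Solr query is deterministic. The field names (`title`, `year`) and the `build_solr_query` helper below are illustrative assumptions, not details from the talk:

```python
from urllib.parse import urlencode


def build_solr_query(fields: dict[str, str], operator: str = "AND") -> str:
    """Join field:value clauses (as extracted by an LLM) into a Solr q parameter."""
    clauses = [f'{field}:"{value}"' for field, value in fields.items()]
    return f" {operator} ".join(clauses)


# e.g. suppose the LLM parsed "recent papers about solr" into structured fields:
params = {"q": build_solr_query({"title": "solr", "year": "2024"}), "rows": "10"}
query_string = urlencode(params)  # ready to append to /select? on a Solr endpoint
```

The LLM contributes the hard part (choosing fields and values from free text, guided by the index's metadata); keeping the query assembly in plain code makes the resulting request inspectable before it reaches Solr.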
Session 1 - Intro to Robotic Process Automation (UiPathCommunity)
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program:
https://bit.ly/Automation_Student_Kickstart
In this session, we shall introduce you to the world of automation, the UiPath Platform, and guide you on how to install and setup UiPath Studio on your Windows PC.
📕 Detailed agenda:
What is RPA? Benefits of RPA?
RPA Applications
The UiPath End-to-End Automation Platform
UiPath Studio CE Installation and Setup
💻 Extra training through UiPath Academy:
Introduction to Automation
UiPath Business Automation Platform
Explore automation development with UiPath Studio
👉 Register here for our upcoming Session 2 on June 20: Introduction to UiPath Studio Fundamentals: https://community.uipath.com/events/details/uipath-lagos-presents-session-2-introduction-to-uipath-studio-fundamentals/
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an... (Jason Yip)
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge Capture & Transfer
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectors (DianaGray10)
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-Efficiency (ScyllaDB)
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
inQuba Webinar: Mastering Customer Journey Management with Dr Graham Hill (LizaNolte)
HERE IS YOUR WEBINAR CONTENT! 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find the webinar recording both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Editing Behavior over Time: Power vs. Standard Wikidata Editors
1. Editing Behavior over Time: Power vs. Standard Wikidata Editors
Cristina Sarasua*, Alessandro Checco, Gianluca Demartini, Djellel E. Difallah, Michael Feldman, Lydia Pintscher
sarasua@ifi.uzh.ch | @csarasuagar
WikidataCon 2017
2. 6.8K-8.7K active editors (Source: 08.2016-08.2017, The Wikidata Revolution, Lydia Pintscher, Wikimania 2017)
7. Goals (data-driven study + discussion):
1. Understand the differences in behaviour between power editors and standard editors.
2. Be able to identify whether an editor will be a “power” or a “standard” editor.
3. Provide a method that helps interested standard editors find their editing mission.
8. Editor Types Evolution
Editors are tracked both session-based (S1, S2, S3, S4) and month-based (M1, M2, M3, M4), and characterized along two dimensions: volume (# edits: high vs. low) and lifespan (# months: long vs. short).
11. What does the related work say?
“Wikipedians are born, not made. They don’t do more over time and they maintain a high and constant level of participation.” [Panciera et al. 2009, data-driven study]
“‘Wikidatians’ acquire a higher sense of responsibility for their work, interact more with the community, take on more advanced tasks, and use a wider range of tools.” [Piscopo et al. 2017, interviews]
“There are different functional roles among editors: reference editor, item editor, item creator, item expert, property editor, and property engineer.” [Mueller-Birn et al. 2015, data-driven study]
12. Methodology
Data: 139K+ editors, 32M+ edits, 7M+ items (human edits on item pages, excluding tool edits), grouped into sessions.
Analysis: descriptive statistics; a statistical model to examine trends among different editors; a classification method to predict the lifespan and number of edits an editor will have.
14. Edit sessions
F1. Shorter times between edits, and a longer session definition than in Wikipedia (4.37 hours) [Geiger et al. 2013].
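The session grouping above can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: it assumes edits arrive as per-editor timestamp lists, and uses the 4.37-hour gap reported on the slide as the session cutoff.

```python
from datetime import datetime, timedelta

# Session cutoff reported on the slide: a gap longer than this starts a new session.
SESSION_GAP = timedelta(hours=4.37)

def group_into_sessions(timestamps):
    """Group one editor's edit timestamps into sessions.

    Two consecutive edits belong to the same session when the gap
    between them is at most SESSION_GAP.
    """
    sessions = []
    current = []
    for ts in sorted(timestamps):
        if current and ts - current[-1] > SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(ts)
    if current:
        sessions.append(current)
    return sessions

# Example: three edits close together, then one edit a day later.
edits = [
    datetime(2017, 8, 1, 9, 0),
    datetime(2017, 8, 1, 9, 30),
    datetime(2017, 8, 1, 11, 0),
    datetime(2017, 8, 2, 12, 0),
]
print([len(s) for s in group_into_sessions(edits)])  # [3, 1]
```

Per-session metrics such as # edits or seconds spent (the indicators in the later slides) can then be computed over each returned group.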
15. Editors and items
F2. Few editors with many edits (and vice versa); few items with many editors (and vice versa).
16. Lifespan
F3. Few editors worked over almost 4 years; no linear relation between edit count and lifespan.
17. F4. Contribution
Metrics: # edits (per session/month), # edits per item (s/m), # items edited (s/m).
Editors with a longer lifespan tend to maintain a constant contribution; others don't. Editors with a higher volume tend to maintain a constant contribution; others don't (not as clear).
[Plots: metric i1 per month, split by lifespan and by edit count]
18. F5. Participation
Metric: # seconds spent (per session).
Editors with a long lifespan maintain a constant participation; others don't. Some editors with a high volume of edits maintain a constant participation.
[Plots: metric i4 per session, split by lifespan and by edit count]
19. F6. Diversity
Metric: entropy of the type of edit (per session/month).
Editors with a long lifespan tend to increase the diversity of their edit types (monthly). Among the others, some increase and some decrease.
[Plots: metric i5 per month, split by lifespan and by edit count]
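The diversity metric above is Shannon entropy over the distribution of edit types within a session or month. A minimal sketch, assuming edit types are given as plain strings (the type labels here are illustrative, not Wikidata's actual edit taxonomy):

```python
from collections import Counter
from math import log2

def edit_type_entropy(edit_types):
    """Shannon entropy (in bits) of the edit-type distribution in one
    session or month. Higher entropy = more diverse editing.
    """
    counts = Counter(edit_types)
    total = sum(counts.values())
    return sum(-(c / total) * log2(c / total) for c in counts.values())

# An editor doing only one kind of edit has zero entropy...
print(edit_type_entropy(["claim", "claim", "claim"]))
# ...while an even mix over four types yields 2 bits.
print(edit_type_entropy(["claim", "label", "sitelink", "reference"]))  # 2.0
```

Tracking this value per month is what lets the study say that long-lifespan editors "increase the diversity of their edits" over time.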
20. Identifying power and standard editors
Lifespan prediction and volume-of-edits prediction: F1-scores for Random Forest and Logistic Regression classifiers, using different numbers of sessions as input. Cutoffs: 15 months (lifespan), 100 edits (volume).
● Lifespan is predicted better than volume of edits.
21. Identifying power and standard editors (cont.)
● Lifespan is predicted better than volume of edits.
● Volume of edits is predicted better for standard editors than for power editors (in both session- and month-based evolution); lifespan, conversely, is predicted better for power editors.
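The two cutoffs above (15 months, 100 edits) define the binary labels the classifiers are trained to predict. A minimal labeling sketch, assuming simple per-editor summaries; the field names and example figures are hypothetical, not taken from the dataset:

```python
# Slide thresholds separating "power" from "standard" along each dimension.
LIFESPAN_CUTOFF_MONTHS = 15
VOLUME_CUTOFF_EDITS = 100

def label_editor(stats):
    """Binary labels along the two dimensions studied in the talk.

    `stats` is a per-editor summary dict with illustrative field names.
    """
    return {
        "lifespan": "long" if stats["lifespan_months"] >= LIFESPAN_CUTOFF_MONTHS else "short",
        "volume": "high" if stats["edit_count"] >= VOLUME_CUTOFF_EDITS else "low",
    }

# Hypothetical editors for illustration.
editors = {
    "alice": {"lifespan_months": 30, "edit_count": 5000},
    "bob":   {"lifespan_months": 4,  "edit_count": 40},
}
print({name: label_editor(s) for name, s in editors.items()})
```

In the study itself, a Random Forest and a Logistic Regression classifier are then trained on features from an editor's first few sessions to predict these labels, and evaluated with F1-score.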
22. Conclusions from this research
● Skewed distribution in volume of edits.
● 46% of editors are presumably “gone”.
● Power editors (in contrast to standard editors) tend to have habits and to be constant in contribution and participation.
● Power editors tend to increase the diversity of their types of actions over months.
23. How do we help standard users to have editing habits that suit them?
25. Proposal: a method & tool
For standard editors: define intentions and resolutions; identify with roles and missions; publish calls for action; define data needs.
Connecting standard editors with power editors and data providers, around individual/social missions, dissemination of best practices, focus, and routines.
26. @ Researchers, developers
● Related theories to consider?
● Which Wikidata tools to integrate in the process?
@ Editors, community managers
● Are there people overwhelmed who don’t know how to contribute best?
● How do we collect and disseminate tips and tricks about deciding what to edit?
● How can we enable 1:1 collaboration between power editors / data providers and standard users?
28. References
Katherine Panciera, Aaron Halfaker, and Loren Terveen. 2009. Wikipedians are born, not made: a study of power editors on Wikipedia. In Proceedings of GROUP '09. ACM, New York, NY, USA, 51-60. DOI: https://doi.org/10.1145/1531674.1531682
Alessandro Piscopo, Christopher Phethean, and Elena Simperl. 2017. Wikidatians are born: paths to full participation in a collaborative structured knowledge base. In Proceedings of the 50th Hawaii International Conference on System Sciences. University of Hawaii, 4354-4363. DOI: https://doi.org/10.24251/HICSS.2017.527
Claudia Müller-Birn, Benjamin Karran, Janette Lehmann, and Markus Luczak-Rösch. 2015. Peer-production system or collaborative ontology engineering effort: what is Wikidata? In Proceedings of OpenSym '15. ACM, New York, NY, USA, Article 20. DOI: https://doi.org/10.1145/2788993.2789836
R. Stuart Geiger and Aaron Halfaker. 2013. Using edit sessions to measure participation in Wikipedia. In Proceedings of CSCW '13. ACM, New York, NY, USA, 861-870. DOI: https://doi.org/10.1145/2441776.2441873
29. Image sources
Slide 6 Attribution Nalex.25 - Creative Commons Attribution-Share Alike 4.0 International
Slide 8 CC0 https://pixabay.com/en/books-education-school-literature-484766/
https://pixabay.com/en/hourglass-sand-watch-time-glass-1046841/
Slide 9 https://pixabay.com/en/question-mark-pile-question-mark-2492009/
CC0 https://pixabay.com/en/business-success-winning-chart-163464/
https://pixabay.com/en/code-technology-monitor-computer-2588957/
Slide CC0 https://pixabay.com/en/user-person-people-profile-account-1633249/
Slide 24 CC0 https://pixabay.com/en/user-group-icon-person-business-1275780/ https://pixabay.com/en/man-woman-question-mark-problems-2814937/
https://pixabay.com/en/map-travel-compass-magnifying-glass-2685795/
Slide 25 https://pixabay.com/en/protest-models-art-artist-2265287/
Slide 26 https://blog.wikimedia.de/2012/04/04/meet-the-wikidata-team/ photo by Phillip Wilke. CC-BY-SA-3.0
Slide 26 Group photo of Wikimania 2017 attendees. Photo by Victor Grigas/Wikimedia Foundation, CC BY-SA 4.0.