Wikidata: Verifiable, Linked Open Knowledge That Anyone Can Edit – Dario Taraborelli
Slides for my September 23 talk on Wikidata and WikiCite – NIH Frontiers in Data Science lecture series.
Persistent URL: https://dx.doi.org/10.6084/m9.figshare.3850821
Opportunities and challenges presented by Wikidata in the context of biocuration – Benjamin Good
Abstract—Wikidata is a world-readable and world-writable knowledge base maintained by the Wikimedia Foundation. It offers the opportunity to collaboratively construct a fully open-access knowledge graph spanning biology, medicine, and all other domains of knowledge. To meet this potential, social and technical challenges must be overcome, many of which are familiar to the biocuration community. These include community ontology building, high-precision information extraction, provenance, and license management. By working together with Wikidata now, we can help shape it into a trustworthy, unencumbered central node in the Semantic Web of biomedical data.
There are high expectations for Linked Government Data—the practice of publishing public sector information on the Web using Linked Data formats. This slideset reviews some of the ongoing work in the US, UK, and within W3C, as well as activities within my institute (DERI, National University of Ireland, Galway).
SciDataCon 2014 Data Papers and their applications workshop - NPG Scientific ... – Susanna-Assunta Sansone
Part of the SciDataCon14 workshop on "Data Papers and their applications", run by Brian Hole and me to help attendees understand current data-publishing journals and trends, as well as the editorial processes at NPG's Scientific Data and Ubiquity's Open Health Data.
We describe current work in federating data from institutional research profiling systems – providing single-point access to substantial numbers of investigators through concept-driven search, visualization of the relationships among those investigators, and the ability to interlink systems into a single information ecosystem.
Validata: A tool for testing profile conformance – Alasdair Gray
Validata (http://hw-swel.github.io/Validata/) is an online web application for validating a dataset description expressed in RDF against a community profile expressed as a Shape Expression (ShEx). Additionally, it provides an API for programmatic access to the validator. Validata can be used for multiple community-agreed standards, e.g. DCAT, the HCLS community profile, or the Open PHACTS guidelines, and there are currently deployments to support each of these. Validata can be easily repurposed for different deployments by providing it with a new ShEx schema. The Validata code is available from GitHub (https://github.com/HW-SWeL/Validata).
Presentation given at SDSVoc https://www.w3.org/2016/11/sdsvoc
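The sort of check Validata automates can be sketched in a few lines of Python. The profile below is reduced to sets of required and recommended predicates (all names here are hypothetical, chosen for illustration); the real tool validates full RDF graphs against ShEx schemas rather than predicate lists:

```python
# Sketch of a profile-conformance check in the spirit of Validata.
# A "profile" is reduced to required and recommended predicate sets;
# triples are plain (subject, predicate, object) tuples.

REQUIRED = {"dct:title", "dct:description", "dct:license"}
RECOMMENDED = {"dct:publisher", "pav:version"}

def check_conformance(triples, focus):
    """Return (missing_required, missing_recommended) for a focus node."""
    present = {p for s, p, o in triples if s == focus}
    return sorted(REQUIRED - present), sorted(RECOMMENDED - present)

# A hypothetical dataset description to validate.
description = [
    ("ex:chembl", "dct:title", "ChEMBL dataset description"),
    ("ex:chembl", "dct:description", "Bioactive molecules and targets"),
    ("ex:chembl", "dct:publisher", "ex:EMBL-EBI"),
]

missing_req, missing_rec = check_conformance(description, "ex:chembl")
print("missing required:", missing_req)     # the license is absent
print("missing recommended:", missing_rec)  # no version statement
```

A ShEx schema additionally constrains value types and cardinalities, which this sketch ignores; it only captures the required/optional distinction that community profiles share.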
The HCLS Community Profile: Describing Datasets, Versions, and Distributions – Alasdair Gray
Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. However, while there are many relevant vocabularies for the annotation of a dataset, none sufficiently captures all the necessary metadata. This prevents uniform indexing and querying of dataset repositories. Towards providing a practical guide for producing a high quality description of biomedical datasets, the W3C Semantic Web for Health Care and the Life Sciences Interest Group (HCLSIG) identified Resource Description Framework (RDF) vocabularies that could be used to specify common metadata elements and their value sets. The resulting HCLS community profile covers elements of description, identification, attribution, versioning, provenance, and content summarization. The HCLS community profile reuses existing vocabularies, and is intended to meet key functional requirements including indexing, discovery, exchange, query, and retrieval of datasets, thereby enabling the publication of FAIR data. The resulting metadata profile is generic and could be used by other domains with an interest in providing machine readable descriptions of versioned datasets.
The goal of this presentation is to give an overview of the HCLS Community Profile and explain how it extends and builds upon other approaches.
Presentation given at SDSVoc (https://www.w3.org/2016/11/sdsvoc/)
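To make the profile concrete, here is a minimal, hand-rolled Turtle description of a hypothetical dataset using a few of the profile's description and versioning elements (dct:title, dct:description, pav:version); a real HCLS description carries many more properties and distinguishes summary, version, and distribution levels:

```python
# Emit a tiny Turtle dataset description using vocabularies the HCLS
# profile reuses (Dublin Core Terms, DCAT, PAV). Dataset details are
# hypothetical; only a handful of the profile's elements are shown.

PREFIXES = {
    "dct": "http://purl.org/dc/terms/",
    "dcat": "http://www.w3.org/ns/dcat#",
    "pav": "http://purl.org/pav/",
}

def hcls_description(uri, title, description, version):
    lines = [f"@prefix {p}: <{ns}> ." for p, ns in PREFIXES.items()]
    lines.append(f"<{uri}> a dcat:Dataset ;")
    lines.append(f'    dct:title "{title}" ;')
    lines.append(f'    dct:description "{description}" ;')
    lines.append(f'    pav:version "{version}" .')
    return "\n".join(lines)

print(hcls_description(
    "http://example.org/dataset/chembl/20",
    "ChEMBL", "Bioactive drug-like molecules.", "20"))
```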
Supporting Dataset Descriptions in the Life Sciences – Alasdair Gray
Machine-processable descriptions of datasets can help make data more FAIR, that is, Findable, Accessible, Interoperable, and Reusable. However, there is a variety of metadata profiles for describing datasets, some specific to the life sciences and others more generic in focus. Each profile has its own set of properties and its own requirements as to which must be provided and which are optional. Developing a dataset description that conforms to a specific metadata profile is a challenging process.
In this talk, I will give an overview of some of the dataset description specifications that are available. I will discuss the difficulties in writing a dataset description that conforms to a profile, and the tooling that I've developed to support dataset publishers in creating metadata descriptions and validating them against a chosen specification.
Seminar talk given at the EBI on 5 April 2017
Tutorial: Describing Datasets with the Health Care and Life Sciences Communit... – Alasdair Gray
Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. However, while there are many relevant vocabularies for the annotation of a dataset, none sufficiently captures all the necessary metadata. This prevents uniform indexing and querying of dataset repositories. Towards providing a practical guide for producing a high quality description of biomedical datasets, the W3C Semantic Web for Health Care and the Life Sciences Interest Group (HCLSIG) identified Resource Description Framework (RDF) vocabularies that could be used to specify common metadata elements and their value sets. The resulting HCLS community profile covers elements of description, identification, attribution, versioning, provenance, and content summarization. The HCLS community profile reuses existing vocabularies, and is intended to meet key functional requirements including indexing, discovery, exchange, query, and retrieval of datasets, thereby enabling the publication of FAIR data. The resulting metadata profile is generic and could be used by other domains with an interest in providing machine readable descriptions of versioned datasets. The goal of this tutorial is to explain elements of the HCLS community profile and to enable users to craft and validate descriptions for datasets of interest.
Linking Scientific Metadata (presented at DC2010) – Jian Qin
Linked entity data in metadata records builds a foundation for the Semantic Web. Even though metadata records contain rich entity data, there is no linking between associated entities such as persons, datasets, projects, publications, or organizations. We conducted a small experiment using the dataset collection from the Hubbard Brook Ecosystem Study (HBES), in which we converted the entities and their relationships into RDF triples and linked the URIs contained in those triples to the corresponding entities in the Ecological Metadata Language (EML) records. Through a transformation program written in Extensible Stylesheet Language (XSL), we turned a plain EML record display into an interlinked semantic web of ecological datasets. The experiment suggests that incorporating linked entity data into metadata records is methodologically feasible. The paper also argues for the need to change the scientific as well as the general metadata paradigm.
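The paper's transformation used XSLT; as a rough standard-library Python sketch of the same idea, the snippet below pulls entities out of an abridged, hypothetical EML record and emits N-Triples linking the dataset to its title and creator:

```python
import xml.etree.ElementTree as ET

# Abridged, hypothetical EML record (real EML is far richer).
EML = """
<dataset id="hbes.1">
  <title>Stream chemistry at Hubbard Brook</title>
  <creator><individualName>J. Smith</individualName></creator>
</dataset>
"""

def eml_to_ntriples(xml_text, base="http://example.org/"):
    """Extract dataset title and creator from EML and emit N-Triples."""
    root = ET.fromstring(xml_text)
    ds = base + root.get("id")
    title = root.findtext("title")
    creator = root.findtext("creator/individualName")
    return [
        f'<{ds}> <http://purl.org/dc/terms/title> "{title}" .',
        f'<{ds}> <http://purl.org/dc/terms/creator> "{creator}" .',
    ]

for triple in eml_to_ntriples(EML):
    print(triple)
```

A fuller treatment would mint URIs for persons and organizations (rather than literals) so that the same creator can be linked across records, which is the interlinking the experiment demonstrates.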
STM Week: Demonstrating bringing publications to life via an End-to-end XML p... – GigaScience, BGI Hong Kong
Scott Edmunds at the STM Week 2020 Digital Publishing seminar on Demonstrating bringing publications to life via an End-to-end XML publishing platform. 2nd December 2020
The vision for ‘the Research Paper of the Future’ promises to make scholarship more discoverable, transparent, inspectable, reusable and sustainable. Yet new forms of scientific output also challenge authors, librarians, publishers and service providers to register, validate, disseminate and preserve them as elements of the scholarly record. What constitutes authorship in a collaborative process of GitHub pull requests and commits? When to capture, reference and preserve dynamic data sets that change over time? How to package and render complex executable collections for review and delivery? This session considers key challenges in operationalising the Research Paper of the Future from the perspectives of a publisher, a library administrator and a scientist/developer of a collaborative authoring platform.
Integrating with others: Stable VIVO URIs for local authority records; linkin... – Violeta Ilik
Integrating with others: Stable VIVO URIs for local authority records; linking to VIAF; ORCID organizational identifiers; W3C Dataset ontology work by Melissa Haendel & Violeta Ilik, VIVO Implementation Fest, Durham NC, March 20, 2014
Starting from scratch – building the perfect digital repository – Violeta Ilik
By establishing a digital repository at the Feinberg School of Medicine (FSM), Northwestern University, Chicago campus, we anticipate gaining the ability to create, share, and preserve attractive, functional, and citable digital collections and exhibits. Galter Health Sciences Library did not have a repository as of November 2014. In just a few months we formed a small team charged with selecting the most suitable open-source platform for our digital repository software. We followed the National Library of Medicine master evaluation criteria, looking at factors that included: functionality, scalability, extensibility, interoperability, ease of deployment, system security, physical environment, platform support, demonstrated successful deployments, system support, strength of development community, stability of development organization, and strength of technology roadmap for the future. These factors are important in our case given the desire to connect the digital repository with another platform that is an essential piece of the big FSM picture – VIVO. VIVO is a linked data platform that serves as a researchers’ hub, providing the names of researchers from academic institutions along with their research output, affiliation, research overview, service, background, researcher identities, teaching, and much more.
Wikidata tutorial presented at the U.S. National Archives on October 10, 2015 as part of WikiConference USA.
Contains edits and corrections from version presented.
Released under CC0.
Lookout iOS developer Stephanie Shupe presented at the Grace Hopper Celebration of Women in Computing on October 10, 2014. She explains the processes that Lookout has used to successfully scale its mobile app.
We Are Museums 2016 workshop: Introduction to usability testing – Tiana Tasich
Slides from the We Are Museums 2016 workshop, Introduction to usability testing, by Tiana Tasich, digital consultant and strategist at Digitelling Agency.
Please note that the blurriness of the slides is due to SlideShare not catering for direct uploads of Keynote files; the slides have been uploaded as a PDF, losing some of the quality of the presentation. For a better experience and more information, see my blog post http://digitelling.agency/usability-testing-for-beginners/
Get in touch:
http://digitelling.agency
Citing as a public service. Building the sum of all human citations – Dario Taraborelli
Slides from my talk at the Wikipedia Science Conference (#wikisci). London, September 3, 2015.
https://wikimedia.org.uk/wiki/User:DarTar/Citing_as_a_public_service
Scott Edmunds' slides for class 8 of the HKU Data Curation course (module MLIM7350, Faculty of Education), covering open science and data publishing.
Linking Knowledge Organization Systems via Wikidata (DCMI conference 2018) – Joachim Neubert
Wikidata has been used successfully as a linking hub for authority files. Knowledge organization systems like thesauri or classifications are more complex and pose additional challenges.
OBJECTIVES: Translational research focuses on the bench-to-bedside information transfer process — getting the information from researchers into the hands of clinical decision makers. At the same time, researchers who manage international research collaborations could benefit from increased knowledge and awareness of online collaboration tools to support these projects. Our goal was to support both needs through building awareness and skills with online and social media.
METHODS: The Library developed a curriculum targeted specifically at academic researchers, focusing on collaboration technologies and online tools to support the research process. The curriculum provides instruction at three levels: gateway, bridge, and mastery tools. The goal of Level One is to persuade researchers of the utility of online social tools. To develop the program, input was solicited from researchers identified as leaders in this area, as well as from focus groups of students, to discover which tools are already being used.
RESULTS: Training is being provided on those tools identified as most likely to engage researchers (Google Docs, Skype, online scheduling, Adobe Connect, citation-sharing tools). The curriculum is being delivered as workshops, duplicated as podcasts and in other online media.
CONCLUSIONS: Online and social media are practical tools for supporting distance collaborations relatively inexpensively while offering the added benefit of placing selected information in online spaces that facilitate discovery and discussion with clinical care providers, thus supporting the fundamental research processes at the same time as promoting bench-to-bedside information transfer.
A presentation by Gordon Dunsire.
Delivered at the Cataloguing and Indexing Group Scotland (CIGS) Linked Open Data (LOD) Conference which took place Fri 21 September 2012 at the Edinburgh Centre for Carbon Innovation.
Digital Identity is fundamental to collaboration in bioinformatics research and development because it enables attribution, contribution, publication to be recorded and quantified.
However, current models of identity are often obsolete and have problems capturing both small contributions ("microattribution") and large contributions ("mega-attribution") in science. Without adequate identity mechanisms, the incentive for collaboration can be reduced and the utility of collaborative social tools hindered.
Using examples of metabolic pathway analysis with the Taverna workbench and myexperiment.org, this talk illustrates problems and solutions for identifying scientists accurately and effectively in collaborative bioinformatics networks on the Web.
This presentation was provided by Jim Hahn of The University of Pennsylvania, during the NISO event "Transforming Search: What the Information Community Can and Should Build." The virtual conference was held on August 26, 2020.
Talk at the World Science Festival at Columbia, June 2, 2017: session on Big Data and Physics: http://www.worldsciencefestival.com/programs/big-data-future-physics/
Slides from my Wikimania 2014 presentation on targeted acquisition/contribution campaigns. https://wikimania2014.wikimedia.org/wiki/Submissions/The_missing_Wikipedia_ads:_Designing_targeted_contribution_campaigns
Intro slides for the EventLogging Workshop, introducing a new infrastructure built by the Wikimedia Foundation for web analytics and collaborative data modeling.
http://www.mediawiki.org/wiki/EventLogging/Workshop
Slides from my presentation at the Wikimedia Foundation/Stanford SNAP Group Meeting on the use of microtasks and recommender systems to better engage with Wikipedia readers and new users.
New editors not welcome: When Wikipedia articles trend – Dario Taraborelli
My lightning talk at WMF All Hands 2011 on trending and semi-protected Wikipedia articles that are mostly read-only (or hard to edit) for anonymous users
Paper presented at WikiSym 2008, showing what factors are likely to boost or hinder the growth of a wiki-based community. Full paper available at http://nitens.org/docs/wikidyn.pdf and in the forthcoming WikiSym 2008 proceedings.
JMeter webinar - integration with InfluxDB and Grafana – RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
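The metrics JMeter's Backend Listener ships to InfluxDB travel as InfluxDB line protocol: a measurement name, comma-separated tags, fields, and a nanosecond timestamp. A small sketch of that serialisation, with hypothetical sample names and values:

```python
# Serialise a metric sample into InfluxDB line protocol:
#   measurement,tag1=v1,tag2=v2 field1=v1,field2=v2 timestamp_ns
# Integer fields get an "i" suffix; floats are written bare.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# A hypothetical JMeter transaction sample.
line = to_line_protocol(
    "jmeter",
    {"application": "demo", "transaction": "Home_Page"},
    {"count": 42, "pct95": 187.5},
    1606860000000000000,
)
print(line)
```

Grafana then queries these points back out of InfluxDB and plots them; the tag keys become the dimensions you can group and filter dashboards by. (Production use also needs escaping of spaces and commas in names, omitted here.)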
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, machine learning over just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
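As a toy illustration of link prediction over knowledge graphs, a TransE-style scorer rates a triple (h, r, t) by how close the head embedding plus the relation embedding lands to the tail, score = -||h + r - t||. The two-dimensional embeddings below are hand-picked to make the geometry visible, not learned:

```python
import math

# Hand-picked 2-D embeddings for a toy knowledge graph.
EMB = {
    "paris":      [1.0, 0.0],
    "berlin":     [3.0, 0.0],
    "france":     [1.0, 1.0],
    "capital_of": [0.0, 1.0],  # the relation is a translation vector
}

def score(h, r, t):
    """TransE-style plausibility: negative distance ||h + r - t||."""
    diff = [hv + rv - tv for hv, rv, tv in zip(EMB[h], EMB[r], EMB[t])]
    return -math.sqrt(sum(d * d for d in diff))

print(score("paris", "capital_of", "france"))   # zero distance: plausible
print(score("berlin", "capital_of", "france"))  # larger distance: implausible
```

The semantic point of the talk applies here: the scorer only yields predictable inference if "capital_of" consistently denotes the same relation across triples, i.e. if the symbols carry an actual semantics.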
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing toolchains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution-engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 – Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
2. A short history of Wikipedia
A website that anyone can edit
The largest reference work on the internet
A multi-language online encyclopedia
5. Wikipedia: unintended outcomes
accelerate the dissemination of scholarship
support open scientific research
enable distributed fact-checking and curation of scientific knowledge
8. Outline
1. Wikipedia as the front matter to all research
2. A new kind of open knowledge
3. Wikidata: Collaboratively curated linked open data
4. WikiCite: Building the sum of all human citations
5. Applications
6. Concluding remarks
10. “Wikipedia is not the bottom layer of authority, nor the top, but in fact the highest layer without formal vetting. In this unique role, it serves as an ideal bridge between the validated and unvalidated Web.”
Casper Grathwohl
Chronicle of Higher Education
http://chronicle.com/article/article-content/125899/
11. Top sources of DOI resolutions
http://crosstech.crossref.org/2014/02/many-metrics-such-data-wow.html
http://blog.crossref.org/2016/05/https-and-wikipedia.html
12. The world’s most accessed online medical resource?
Heilman and West (2015) doi.org/10.2196/jmir.4069
13. Most visited resource on Ebola in West Africa
Heilman (2016) http://tinyurl.com/jfuyduv
Most used internet site in Liberia, Sierra Leone, and Guinea for Ebola during the 2014 outbreak
Greater than CNN, CDC, and WHO
21. Wikidata
Free knowledge base that anyone can edit
Launched in 2012
Integrated with Wikipedia and other sister projects
Statistics (Aug 2016)
Nearly 20M items
Over 100M statements
29. Expert curation of scientific open data
Benjamin Good (2016) Opportunities and challenges presented by Wikidata in the context of biocuration
http://tinyurl.com/hk9qrmz
30. Expert curation of scientific open data
Gene Wiki: Wikidata SPARQL examples
https://bitbucket.org/sulab/wikidatasparqlexamples/overview
Get a list of all diseases treated by Metformin
Get all the gene ontology evidence codes used in Wikidata
Get all known drug-drug interactions for Methadone via its CHEMBL id
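The first query in the list above can be sketched as a plain SPARQL string aimed at the Wikidata Query Service. This is an illustrative sketch, not the Gene Wiki team's own code; the identifiers Q19484 (Metformin) and P2175 ("medical condition treated") are assumptions based on current Wikidata conventions:

```python
# Build a SPARQL query listing all diseases treated by Metformin.
# Assumed Wikidata identifiers: Q19484 (Metformin), P2175 (medical condition treated).

METFORMIN = "wd:Q19484"
TREATS = "wdt:P2175"

def diseases_treated_by(drug_item: str) -> str:
    """Return a SPARQL query for all conditions a drug is used to treat."""
    return f"""SELECT ?disease ?diseaseLabel WHERE {{
  {drug_item} {TREATS} ?disease .
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en" . }}
}}"""

query = diseases_treated_by(METFORMIN)
print(query)
```

The query string can be submitted to https://query.wikidata.org either interactively or over HTTP; the label service line is the standard way to resolve item QIDs into English labels.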
31. WikiCite
Building the sum of all human citations
Randall Munroe, Wikipedian protester http://tinyurl.com/p3rodlb [CC BY]
36. Linking is a small act of generosity that sends people away from your site to some other that you think shows the world in a way worth considering. [...]
[Sources] that are not generous with linking [...] are a stopping point in the ecology of information. That’s the operational definition of authority: the last place you visit when you’re looking for an answer. If you are satisfied with the answer, you stop your pursuit of it. Take the links out and you think you look like more of an authority.
D. Weinberger (2012) Linking is a public good
http://www.hyperorg.com/blogger/2012/02/26/2b2k-linking-is-a-public-good/
43. The molecular origins of insulin go at least as far back as the simplest unicellular [[eukaryotes]].<ref name='LeRoith'>{{cite journal | vauthors = LeRoith D, Shiloach J, Heffron R, Rubinovitz C, Tanenbaum R, Roth J | title = Insulin-related material in microbes: similarities and differences from mammalian insulins | journal = Can. J. Biochem. Cell Biol. | volume = 63 | issue = 8 | pages = 839–49 | year = 1985 | pmid = 3933801 | doi = 10.1139/o85-106 }}</ref> Apart from animals, insulin-like proteins are also known to exist in the Fungi and Protista kingdoms.
References in Wikipedia
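The machine-readable identifiers embedded in a {{cite journal}} template, like the PMID and DOI above, are what make citations harvestable as structured data. A toy Python extractor sketching the idea (this is an illustration only, not the parser Wikipedia or Wikidata actually uses, and it ignores edge cases like pipes inside parameter values):

```python
# Toy extractor: pull named parameters out of a {{cite journal}} wikitext template.
# Illustrative sketch only; real templates need a proper wikitext parser.

wikitext = (
    "{{cite journal | vauthors = LeRoith D, Shiloach J | "
    "title = Insulin-related material in microbes | "
    "journal = Can. J. Biochem. Cell Biol. | volume = 63 | "
    "pages = 839-49 | year = 1985 | pmid = 3933801 | "
    "doi = 10.1139/o85-106 }}"
)

def template_params(text: str) -> dict:
    """Split a cite template into a {name: value} dict."""
    inner = text.strip().strip("{}")
    parts = [p.strip() for p in inner.split("|")[1:]]  # drop the "cite journal" head
    pairs = (p.split("=", 1) for p in parts if "=" in p)
    return {k.strip(): v.strip() for k, v in pairs}

params = template_params(wikitext)
print(params["pmid"], params["doi"])  # → 3933801 10.1139/o85-106
```

Once extracted, identifiers like the PMID and DOI can be matched against external registries (PubMed, Crossref) and turned into statements on a Wikidata item for the cited work.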
45. WikiCite: goals
Lay the foundations for building a repository of all Wikimedia citations and source metadata as structured data
Design data models and technology to improve the coverage, quality, standards-compliance, and machine-readability of citations and source metadata in Wikimedia projects
https://meta.wikimedia.org/wiki/WikiCite_2016
46. Wikidata as the solution
Vision
Technology
Community
Scale
Licensing
Independence
57. Most cited authors in the research corpus on Zika
SPARQL: http://tinyurl.com/jb8da68
58. Semi-automated recommendation of missing statements or sources for unsourced statements
https://www.wikidata.org/wiki/Wikidata:Primary_sources_tool
https://meta.wikimedia.org/wiki/Grants:IEG/StrepHit:_Wikidata_Statements_Validation_via_References
59. Tools for crowdsourcing entity matching / disambiguation
http://www.generalist.org.uk/blog/2014/wikidata-identifiers-and-the-odnb-where-next/
http://www.generalist.org.uk/blog/2014/wikidata-and-identifiers-part-2-the-matching-process/
60. all statements citing a New York Times article
the most popular scholarly journals used as citations for statements in any item that is a subclass of economics
all statements citing the works of Joseph Stiglitz
all statements citing journal articles by physicists from Oxford University
all statements citing a journal article that was retracted
all statements citing a source that cites a journal article that was retracted
New opportunities for linked open knowledge curation and discovery
https://meta.wikimedia.org/wiki/WikiCite_2016/Report/Group_5
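Queries like those above lean on Wikidata's RDF reference model, in which each statement node links to its references via prov:wasDerivedFrom and each reference names its source with pr:P248 ("stated in"). A minimal Python sketch of the first query in the list, with a deliberately hypothetical QID standing in for a New York Times article item:

```python
# Sketch: find all statements whose reference cites a given source item,
# via Wikidata's reference model (prov:wasDerivedFrom + pr:P248 "stated in").
# The QID below is a placeholder, not a real NYT article item.

SOURCE = "wd:Q123456"  # hypothetical item for a New York Times article

def statements_citing(source_item: str) -> str:
    """Return a SPARQL query for all statements referenced to source_item."""
    return f"""SELECT ?item ?property ?statement WHERE {{
  ?statement prov:wasDerivedFrom ?ref .
  ?ref pr:P248 {source_item} .
  ?item ?property ?statement .
}}"""

query = statements_citing(SOURCE)
print(query)
```

The more elaborate queries in the list (e.g. statements citing a retracted article) compose the same reference pattern with further constraints on the cited item itself.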
62. Liberate public domain bibliographic and citation data
Support new forms of open curation and distributed fact-checking
Accelerate open scientific research
Verifiable, Linked Open Knowledge That Anyone Can Edit
64. Thank you
Acknowledgments
Daniel Mietchen, Jonathan Dugan, Lydia Pintscher, Cameron Neylon, James Hare, James Heilman, Magnus Manske, the Gene Wiki team (especially Andra Waagmeester and Benjamin Good), the University of Chicago Knowledge Lab, all WikiCite 2016 participants, and Wikidata Source Metadata project contributors.
Additional image credits
Printing press, M. Wirth https://thenounproject.com/term/printing/11880/ [CC BY]
Cocitation network for openfMRI papers, F. Å. Nielsen https://twitter.com/fnielsen/status/752860630932156416
dario@wikimedia.org • @readermeter • @Wikidata • @WikiCite • @WikiResearch