BiOnym is a concept-mapping workflow for taxon name reconciliation that addresses issues in integrating and matching large biodiversity databases. It provides a flexible modular architecture to customize the matching process and integrate third-party components. BiOnym leverages existing open-source tools and uses standard formats to enable interoperability. The workflow includes steps for data preparation, multiple chained matchers, and review of matching results.
BiOnym
1. iME4d - BiOnym
A concept-mapping workflow for taxon name reconciliation
Friday 7 March 2014 – Rome
Fabio Fiorellato, Edward Vanden Berghe, Gianpaolo Coro, Nicolas Bailly
2. Big Data makes its way to biology
• Data volumes increase dramatically
– Management of large databases (millions of records) is easier
• No longer the realm of professional IT people
– Biologists wake up to the advantages of
• Good data management, including preservation
• Data sharing
• Makes it possible to do science in a different way
3. ‘Big Data’: Need for data integration
• Becoming a very realistic possibility
– Management of DBs of millions of records
• Needs integration of small, restricted-scope datasets into massive databases
– Intra-discipline integration (homogeneous)
– Inter-discipline integration (heterogeneous)
• Individual studies are too small to inform on a scale commensurate with the problems humankind faces
– Evidence-based management of living resources
– Climate change, global warming…
4. iMarine biodiversity ‘ecosystem’
[Diagram: taxon name access, enrichment and reconciliation; occurrence data access, enrichment and reconciliation; environmental data access; distribution modelling (openModeller, AquaMaps)]
5. Central role of taxon name reconciliation
[Same diagram as slide 4, with taxon name reconciliation highlighted as the hub linking taxon name access and enrichment, occurrence data access, enrichment and reconciliation, environmental data access, and distribution modelling (openModeller, AquaMaps)]
6. Taxonomic names are the keys…
• … Keys to bind together information on the same taxon from different sources
• But there are problems
– Different research groups use different spellings
– Accidental misspellings
– Synonym and homonym reconciliation (but outside the scope of BiOnym)
7. Some people can’t type
• Asthenognathas inaefaipes
• Asthenognathus inaeqipes
• Asthenognathus maefaipes
• Astheognathus inaequipes
• Asthenognathus inaeguipes
• Astheognathus inaeqinipes
• Asthenognathus inaequipes
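The variants above differ from the correct Asthenognathus inaequipes by only a few characters, so even a generic edit-similarity measure separates them from unrelated names. A minimal sketch using Python's standard-library difflib, for illustration only; BiOnym's actual matchlets (such as TaxaMatch) use more specialized phonetic and edit-distance rules:

```python
from difflib import SequenceMatcher

REFERENCE = "Asthenognathus inaequipes"

# The misspelled variants from the slide
variants = [
    "Asthenognathas inaefaipes",
    "Asthenognathus inaeqipes",
    "Asthenognathus maefaipes",
    "Astheognathus inaequipes",
    "Asthenognathus inaeguipes",
    "Astheognathus inaeqinipes",
    "Asthenognathus inaequipes",
]

def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1] based on longest matching blocks."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

scores = {v: similarity(v, REFERENCE) for v in variants}
# All variants stay close to the reference; the correct spelling is exact
assert all(score > 0.8 for score in scores.values())
assert scores["Asthenognathus inaequipes"] == 1.0
```

A threshold on such a score is what lets a matcher propose "Astheognathus inaequipes → Asthenognathus inaequipes" automatically while leaving truly different names alone.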
8. Things can go wrong with Excel…
• Clupea harengus Linnaeus, 1758
• Clupea harengus Linnaeus, 1759
• Clupea harengus Linnaeus, 1760
• Clupea harengus Linnaeus, 1761
• Clupea harengus Linnaeus, 1762
• …
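This failure mode (an Excel fill-handle drag that increments the authorship year) is easy to screen for, because the same name and author should never occur with a run of consecutive years. A hypothetical sketch; the `suspicious_year_runs` helper and its regex are illustrative, not part of BiOnym:

```python
import re
from itertools import groupby

records = [
    "Clupea harengus Linnaeus, 1758",
    "Clupea harengus Linnaeus, 1759",
    "Clupea harengus Linnaeus, 1760",
    "Clupea harengus Linnaeus, 1761",
    "Clupea harengus Linnaeus, 1762",
]

# "Genus species Author, Year" — a deliberately simplified authorship format
PATTERN = re.compile(r"^(?P<name>.+?) (?P<author>[A-Z][a-z]+), (?P<year>\d{4})$")

def suspicious_year_runs(rows, min_run=3):
    """Flag identical name+author entries whose years form a consecutive run,
    the telltale signature of a fill-handle accident."""
    parsed = [PATTERN.match(r).groupdict() for r in rows]
    parsed.sort(key=lambda d: (d["name"], d["author"], int(d["year"])))
    flagged = []
    for (name, author), grp in groupby(parsed, key=lambda d: (d["name"], d["author"])):
        years = [int(d["year"]) for d in grp]
        if len(years) >= min_run and years == list(range(years[0], years[0] + len(years))):
            flagged.append((name, author, years))
    return flagged

assert suspicious_year_runs(records) == [
    ("Clupea harengus", "Linnaeus", [1758, 1759, 1760, 1761, 1762])
]
```

Only Clupea harengus Linnaeus, 1758 is a real name; the flagged run marks the other four rows for correction.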
10. Taxonomic names are the keys…
• … Keys to bind together information on the same taxon from different sources
• But there are problems
– Different research groups use different spellings
– Accidental misspellings
• Reconciliation is a necessity, not a luxury!
11. Existing systems…
• … Are not flexible
– We need flexibility, as our use case will dictate what the ‘optimal’ behaviour of the system is
• E.g. manual vs automatic systems
• … Are often coupled to a single ‘reference list’
– Using a different taxonomic scope for test and reference only increases false positives
• E.g. TaxaMatch with IRMNG…
• … Don’t always have the throughput needed for large-scale projects
– Largest DB approx. 20M names – too many pairs!
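The throughput complaint is simple arithmetic: exhaustive all-pairs scoring grows quadratically, so candidate filtering (blocking) is unavoidable at this scale. A sketch assuming a naive initial-letter blocking key; BiOnym's actual candidate-selection strategy is not described in these slides:

```python
from collections import defaultdict

# All-pairs comparison grows quadratically: n names yield n*(n-1)/2 pairs.
n = 20_000_000
pairs = n * (n - 1) // 2
assert pairs > 1.9e14  # roughly 2 x 10^14 comparisons: infeasible to score exhaustively

def block_names(names):
    """Naive blocking: only names sharing a cheap key (here, the initial
    letter) are ever compared, cutting the candidate-pair count drastically."""
    blocks = defaultdict(list)
    for name in names:
        blocks[name[:1].upper()].append(name)
    return blocks

blocks = block_names(["Clupea harengus", "Clupea pallasii", "Asthenognathus inaequipes"])
assert sorted(blocks) == ["A", "C"]
assert len(blocks["C"]) == 2
```

Real systems use stronger keys (genus, phonetic codes) for the same effect: the quadratic blow-up happens only within each block.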
12. Our need
• A flexible, highly customisable, workflow-based approach to taxon name matching
– User controls input
– Output can be used as input in other processes
– Running on high-performance computing infrastructure
BiOnym!
13. Introduction to BiOnym
• As a workflow for taxon name mapping and reconciliation, it is a real-world application of the concept-mapping principles
• It is focused on the domain of taxonomy, with an initial restriction to marine species only
• Provides a full workflow (not only the concept-mapping part)
• Tries to address – and possibly solve – many issues common to the taxonomic community
• Its key concept is “species taxonomy”, where concept properties are the taxonomic atoms
• Is open to integration of third-party components
• Takes advantage of the iMarine distributed infrastructure
14. The iMarine solution: existing state of the art
• A general-purpose concept-mapping framework (COMET) was already available in FAO:
– based on an existing FAO product (limited to the fishing-vessels domain), initially developed with the support of the Japanese trust fund
– domain-independent (can be tailored to any custom domain with little effort)
– provided with all the necessary building blocks and components for general-purpose usage
15. The iMarine solution: the quest for integration
• The integration of COMET inside iMarine was welcomed and anticipated.
• Its main challenges:
– Identify and define the custom domain (biological taxonomy)
– Design and implement:
• custom COMET matchlets (engines assigning similarity scores to pairs of names)
• additional, reusable tools for data interchange and data preparation (DwCA converter, input parser, pre- and post-processors)
– Enable components to be easily distributed among worker nodes inside the infrastructure
– Integration into the iMarine Statistical Manager
16. The iMarine solution: a success story
• The COMET integration inside iMarine, as part of the BiOnym workflow, is a success story:
– Solving the integration challenges required limited effort
• Harvest names for input through iMarine tools
• Send output from BiOnym/COMET on to further tools
– The core matching capabilities of BiOnym were first made available in June 2013
• Pre- and post-processing; parsing
• Matching through (a series of) matchlets, assigning a similarity score to pairs of names
– The modular architecture enabled developers to add new functionality or improve existing features with ease
17. BiOnym key concepts and features
• Its modular architecture is open to contributions and alternatives
– Workflow stages can be plugged in with custom business implementations
– Can leverage third-party components (e.g. input data parsing is available both as an in-house component and as a wrapper around the GNI parser from globalnames.org)
• Based on standard, open formats
– Reference data are synthesized from DwCA files
– Input data and matching results are expected and produced as CSV files
– Matching results can also be emitted as XML files in the COMET format
• High flexibility
– Multiple chained matchers, each with its own configuration and thresholds
– Third-party matchers (e.g. Tony Rees’ TaxaMatch) can be seamlessly ‘wrapped’ and plugged into the workflow
– Support for collaborative evaluation of matching results (expected soon)
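The chained-matcher idea above can be sketched as a list of stages, each pairing a scoring function with its own acceptance threshold; names that clear no threshold fall through to manual review. All names here (`Stage`, `chained_match`, the two matchlets) are hypothetical illustrations, not BiOnym's actual API:

```python
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import Callable, List, Optional, Tuple

# A "matchlet" scores a (candidate, reference) pair in [0, 1].
Matchlet = Callable[[str, str], float]

def exact_match(a: str, b: str) -> float:
    return 1.0 if a.lower() == b.lower() else 0.0

def fuzzy_match(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

@dataclass
class Stage:
    matchlet: Matchlet
    threshold: float  # per-stage acceptance threshold

def chained_match(name: str, reference: List[str],
                  stages: List[Stage]) -> Optional[Tuple[str, float]]:
    """Run matchers in order; the first stage that clears its threshold wins,
    so cheap/strict matchers short-circuit expensive/lenient ones."""
    for stage in stages:
        best = max(((ref, stage.matchlet(name, ref)) for ref in reference),
                   key=lambda pair: pair[1])
        if best[1] >= stage.threshold:
            return best
    return None  # unresolved: hand over to (collaborative) review

reference = ["Clupea harengus", "Asthenognathus inaequipes"]
pipeline = [Stage(exact_match, 1.0), Stage(fuzzy_match, 0.8)]

assert chained_match("Clupea harengus", reference, pipeline) == ("Clupea harengus", 1.0)
hit = chained_match("Astheognathus inaequipes", reference, pipeline)
assert hit is not None and hit[0] == "Asthenognathus inaequipes"
```

A wrapped third-party matcher such as TaxaMatch would simply be another `Stage` whose scoring function delegates to the external tool.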
20. Where are we?
• The infrastructure has largely been built
• A user-friendly GUI is under development
• Evaluation
– Efficiency: speed of computations
• Parallel system; compares well with others
– Effectiveness: are the results OK?
• Ran experiments on different test datasets
– Deliberately introduced misspellings in known lists
– ‘Real’ misspellings manually corrected for other purposes
21. The BiOnym interface
Never mind the small print.
Step 1: Select your data.
Step 2: Compose the matching process. This relies on infrastructure resources.
Step 3: Review results. This can be private and ‘for your eyes only’, or public.
24. Where to from here?
• Validation
– Not in terms of quality of output, but…
– Uptake by the biodiversity community
• Sustainability
– Who will take over maintenance after iMarine ends?
• BiOnym is a tool; it is a means to an end
– Support the Ecosystem Approach to Fisheries
25. iMarine biodiversity ‘ecosystem’
[Diagram: taxon name access, enrichment and reconciliation; occurrence data access, enrichment and reconciliation; environmental data access; distribution modelling (openModeller, AquaMaps)]
26. BiOnym in its environment
[Diagram: ecological modelling and rich data management, drawing on the Taxa Authority File, the Vernacular Names Authority File, and Darwin Core Archives]
Based on the COMET Framework developed by Fabio Fiorellato (FAO)