6. History
• Early TDWG standards were ‘data standards’
such as lists of abbreviations
• HISPID was an exception to this
• After 2000, standards were XML Schema-based
‘exchange standards’
• Much overlap between schemas and no reuse
13. 2008 - Moribund!
• http://rs.tdwg.org/ontology/voc/TaxonName
• TDWG Ontology moribund for a year
• Work on Darwin Core proceeds and includes an
RDF rendition, but not in the ontology namespace
• PESI funds me to work on taxon names and
concepts
19. 2009 Re-Organisation
• All files to OWL DL and editable in Protege 4
• Highly structured Bricks and Mortar approach
(explained in a moment)
• Will change visualization layer - use OWLDoc
• Will need minor namespace changes
• Under discussion!
24. Bricks and Mortar
• Break ontology into two types of file
• “Bricks” don’t reference or import from other files
• “Mortars” import and specify relationships - no
new ‘concepts’
• Bricks can be re-used
(Diagram: a “Taxonomy Ontology” mortar joining the
TaxonName, TaxonConcept and Rank bricks)
31. PESI
• A Pan-European Species directories Infrastructure
• An official species ontology for Europe?
• Plan to expose classification as linked data in a form that
should integrate with other semantic web efforts
• Also define geographic regions and occurrence codes
• This is research! We need to experiment with the best
way to do it using semantic technologies
• http://www.eu-nomen.eu/pesi/
38. GUID
• Globally Unique Identifiers are mandatory
• LSIDs have been adopted by TDWG, but there are
issues with roll-out and with integration with the
semantic web
• DOI also has semantic web issues
• GBIF task group being set up to look into a central
service
• Any outcome must support the Linked Data paradigm
• HTTP URIs are my preferred standard
43. Summary
• TDWG is the place to define the core semantics for
biodiversity informatics (specimens, occurrences, taxa,
nomenclature, etc.)
• Previously taken a complex hybrid approach, but should
now be more purely semantic-web based (with mappings
to XML document-based approaches)
• Things will move during this year and you can help shape
the future
• We need client applications outside of taxonomy
44. See Also
Informal, opinionated articles related to
biodiversity informatics
• GUID Persistence as Zen kōan
http://www.hyam.net/blog/archives/346
• A Position on LSIDs
http://www.hyam.net/blog/archives/325
• Identifiers, Identity and Me
http://www.hyam.net/blog/archives/304
roger@hyam.net
Editor's Notes
I am sorry I can't be with you but I had promised my kids I would go camping with them this week. If the weather is bad I may be able to join you remotely.
I hope also that this brief talk is in line with what you want to discuss and that it at least gives some background.
Early TDWG standards had been what I now call data standards. These were things like lists of abbreviations published in book form.
By 2004 there had been a change in focus to what could be called data exchange standards. These mainly took the form of XML document structures specified in XML Schema.
Because of the nature of biodiversity data (particularly taxonomic data) there was an overlap between these schemas. Taxon names and specimens, for example, were defined in both ABCD and DarwinCore. It proved very difficult, if not actually impossible, to incorporate “concepts” from one schema into another at both a syntactic and a semantic level.
In 2005 I was given a 2.5 year contract as TDWG Standards Architect. Part of my role was to try and bring some integration across TDWG.
The conclusion of my work as architect is summed up in this rather scary diagram produced in 2007.
I'm going to talk through this diagram for a few minutes as background.
The blue areas on the diagram are XML Schema based exchange standards.
The green areas Semantic web based.
The yellow areas specific to the TDWG TAPIR protocol and represent a mapping (using what are called TAPIR output models) between XML documents and XML serialised RDF in the green areas.
This diagram represents an attempt to “square the circle” and adopt semantic web technologies without letting go of XML Schema based exchange standards.
There were good reasons for taking this approach.
●Firstly, there was considerable reluctance in the community to leave XML Schemas behind.
●Data-sharing networks using the XML-based protocols DiGIR and BioCASe, and their replacement TAPIR, were in place.
On reflection this is a bad approach because it is very complicated. It involves mapping XML document structures to XML serialisations of RDF. Specifying an XML document in XSD that is also valid RDF/XML is very complex.
I am becoming increasingly of the opinion that semantic technologies are the best way forward and that this kind of binding to XML Schema is a mistake.
On the right hand side of the diagram are two blocks that are relevant to the current discussions.
●TDWG Ontology
●LSID Vocabularies
From the start it was realised that we needed some form of ontology. Developing such an ontology is time-consuming and complex.
At the same time as the ontology was being developed TDWG was actively promoting LSIDs as the preferred Globally Unique Identifier technology.
The default metadata return type for LSIDs is RDF. We therefore needed some form of vocabulary for the RDF elements returned.
Top down modeling of the ontology was not going to provide the application level classes quickly enough.
The solution to this problem was to split the modeling effort into two parts. The LSID Vocabulary was the application level stuff. The TDWG Ontology was the top down stuff.
Some of the LSID classes are in use with data providers such as IPNI and Index Fungorum. I am not aware of the top-down, higher-level classes being in use.
Virtually no development occurred on the TDWG ontology during 2008 as no one had it as part of their paid role to maintain it.
You can still see the pages that render the existing OWL files using XSLT on the TDWG site.
Meanwhile work has continued on the DarwinCore exchange standard, which was never formally ratified. This now includes an RDF rendition and follows the Dublin Core model as closely as possible.
This is not in the TDWG ontology namespace but could be moved there or aliased from there.
PESI funds me to work on standards implementation for that project and as part of that work I will update the TDWG ontology.
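As a concrete illustration of what an RDF rendition of Darwin Core can look like, here is a minimal sketch in Turtle. The dwc: prefix is the real Darwin Core terms namespace; the occurrence URI and the property values are invented for illustration.

```turtle
@prefix dwc: <http://rs.tdwg.org/dwc/terms/> .
@prefix ex:  <http://example.org/occurrence/> .

# A hypothetical occurrence record described with Darwin Core terms.
ex:occ123
    dwc:scientificName "Quercus robur L." ;
    dwc:basisOfRecord  "PreservedSpecimen" ;
    dwc:country        "United Kingdom" .
```

Because the terms are plain RDF properties rather than elements in a fixed document structure, a record like this can be merged with statements from other vocabularies without any schema mapping.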
This is what I plan to do.
All files will be moved to being OWL DL and editable using Protege 4. I now believe Protege is stable enough and usable enough to do this.
We will take a structured approach to ontology modeling where we have ‘Bricks’ and ‘Mortar’ ontologies.
We will need to change the way the ontologies are visualized on line as it currently relies on a specific XML serialization of the OWL. We may use the OWL Doc plugin for Protege - I’d be grateful for feedback on this.
We will need to change some namespaces but changes are likely to be minor.
This new organisation is still up for discussion.
What I am calling Bricks and Mortar is just a modular or component based approach.
There are two distinct types of ontology files.
Bricks are totally self-contained and do not import or reference other ontology elements.
Mortars are used to join existing Bricks into more complex ontologies.
The aim is to maximize re-use of Bricks by making sure they entail very little.
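The pattern can be sketched in OWL (Turtle syntax); all URIs here are hypothetical. A brick declares its own classes and imports nothing; a mortar imports bricks and adds only the properties that join their classes, introducing no new classes of its own.

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix tn:   <http://example.org/TaxonName#> .
@prefix tc:   <http://example.org/TaxonConcept#> .
@prefix tax:  <http://example.org/TaxonomyOntology#> .

# Brick: entirely self-contained, no owl:imports.
<http://example.org/TaxonName> a owl:Ontology .
tn:TaxonName a owl:Class .

# Mortar: imports the bricks and specifies only the relationships
# between their classes - no new "concepts".
<http://example.org/TaxonomyOntology> a owl:Ontology ;
    owl:imports <http://example.org/TaxonName> ,
                <http://example.org/TaxonConcept> .

tax:hasName a owl:ObjectProperty ;
    rdfs:domain tc:TaxonConcept ;   # class defined in the TaxonConcept brick
    rdfs:range  tn:TaxonName .      # class defined in the TaxonName brick
```

Because the bricks entail nothing about each other, another project could import the TaxonName brick alone and supply its own mortar.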
PESI is the project that is allowing me to re-engage with the ontology efforts.
PESI’s aim is to provide an infrastructure for the taxonomy of European organisms.
From a semantic web point of view this could be viewed as a species ontology for Europe. We should certainly be able to expose data in a form that can be consumed on the semantic web.
This will involve producing some useful ontologies of related data, such as geographic regions and occurrence statuses such as “winter migrant”.
PESI is a real project with real deliverables and we need to be careful that we deliver data in a way that can be consumed by real users. Although this may be in line with semantic web technologies it may also involve compromises. There is an element of research in this.
Read more about PESI here: http://www.eu-nomen.eu/pesi/
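One possible shape for such linked data, sketched in Turtle: SKOS is a real W3C vocabulary commonly used for classifications, but the PESI URIs, the region term and the occurrence-status property are all hypothetical — exactly the kind of thing the research would need to settle.

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/pesi/> .

# A hypothetical species exposed as linked data: the classification is
# walked via skos:broader; the region and status links are invented terms.
ex:Quercus_robur
    a skos:Concept ;
    skos:prefLabel "Quercus robur L." ;
    skos:broader ex:Quercus ;        # parent taxon in the classification
    ex:occursIn ex:BritishIsles ;    # hypothetical geographic region
    ex:occurrenceStatus ex:Native .  # hypothetical status term
```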
The whole semantic enablement of biodiversity data relies on objects having globally unique ids. Without these ids it is not possible to make assertions about specific objects.
Because of this TDWG have been through a process of selecting a GUID technology and have chosen LSIDs.
There are issues with this. Personally I believe they are redundant because of the need to proxy them for use in any client software and especially in semantically aware applications.
The approach TDWG have taken is not unusual. The publishing industry have taken a similar approach with DOI and they have similar issues.
A new GBIF task group is being established to look into the problems and try and come up with a solution - even if this involves a centralized authority.
In my view any outcome must support the Linked Data paradigm.
Personally HTTP URIs are my preferred standard.
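The proxying problem can be made concrete with a sketch (identifiers hypothetical). A bare LSID is opaque to a generic Linked Data client, so a resolvable HTTP proxy URI has to be minted for it anyway — which is why the HTTP form seems preferable from the start.

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix dc:  <http://purl.org/dc/elements/1.1/> .

# A dereferenceable HTTP proxy URI asserted to identify the same object
# as the underlying LSID, which ordinary web clients cannot resolve.
<http://lsid.example.org/urn:lsid:example.org:names:12345>
    owl:sameAs <urn:lsid:example.org:names:12345> ;
    dc:title "Quercus robur L." .
```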
To Summarize
TDWG is the place to do it if you want semantic definitions of useful things in biodiversity informatics.
Previously we have taken complex hybrid approaches that were part of a growing experience. I believe we should have a clearer separation of semantic and other technologies now.
Stuff is going to happen this year and you can help shape it.
We need client applications outside of taxonomy to consume this stuff or there is no point in doing it.
Finally you may be interested in a few of my blog posts.
I find myself explaining the same things over and over again in emails and over beer so I have recently decided to capture such thoughts in blog posts.
These are very informal but may be useful background reading on GUIDs.
Thanks for your time and please email me any comments. I will try and answer them during the meeting.