Knowledge Organization System (KOS) for biodiversity information resources, G... - Dag Endresen
Slides from a presentation on the Knowledge Organization System (KOS) work program for GBIF. KOS developments for biodiversity information resources and input to the emerging Vocabulary Management Task Group (VoMaG).
Links
GBIF KOS prototype tools, http://kos.gbif.org/
Tool: Semantic Wiki prototype, http://terms.gbif.org/wiki/
Tool: ISOcat prototype demo, http://kos.gbif.org/isocat/
GBIF concept vocabulary term browser, http://kos.gbif.org/termbrowser/
GBIF Resources Repository, http://rs.gbif.org/terms/
GBIF Vocabulary Server, http://vocabularies.gbif.org/
GBIF Resources Browser, http://tools.gbif.org/resource-browser/
Where is the opportunity for libraries in the collaborative data infrastructure? - LIBER Europe
Presentation by Susan Reilly at Bibsys2013 on the opportunities for libraries and their role in the collaborative data infrastructure. Looks at data sharing, authentication, preservation and advocacy.
Curation and Preservation of Crystallography Data - Manjula Patel
A presentation given by Manjula Patel (UKOLN) at "Chemistry in the Digital Age: A Workshop connecting research and education", June 11-12th 2009, Penn State University,
http://www.chem.psu.edu/cyberworkshop09
Biodiversity Informatics: An Interdisciplinary Challenge - Bryan Heidorn
"Impacto de la Informática en el Conocimiento de la Biodiversidad: Actualidad y Futuro" (The Impact of Informatics on Biodiversity Knowledge: Present and Future) at Universidad Nacional de Colombia on August 12, 2011. https://sites.google.com/site/simposioinformaticaicn/home
Research data management (RDM) and the FAIR principles (Findable, Accessible, Interoperable, Reusable) are widely
promoted as a basis for a shared research data infrastructure. Nevertheless, researchers involved in next-generation
sequencing (NGS) still lack adequate RDM solutions. NGS metadata is generally not stored together with the raw
NGS data, but kept by individual researchers in separate files. This situation complicates RDM practice. Moreover,
the (meta)data often does not meet the FAIR principles [6]. Consequently, a central FAIR-compliant repository
is highly desirable to support NGS-related research. We have selected iRODS (the integrated Rule-Oriented Data
System) [3] as a basis for implementing a sequencing data repository because it allows storing both data and metadata
together. iRODS serves as scalable middleware to access different storage facilities in a centralized and virtualized
way, and supports different types of clients. This repository will be part of an ecosystem of RDM solutions that
cover complementary phases of the research data life cycle in our organization (Academic Medical Center of the
University of Amsterdam). We selected Virtuoso [5] to enrich the metadata from iRODS to enable the management
of a triplestore for linked data. The metadata in the iCAT (iRODS’ metadata catalogue) and the ontology in Virtuoso
are kept synchronized by enforcement of strict data manipulation policies. We have implemented a prototype to
preserve raw sequencing data for one research group. Three iRODS client interfaces are used for different purposes:
Davrods [4] for data and metadata ingestion and data retrieval; Metalnx-web [7] for administration, data curation, and
repository browsing; and iCommands [2] for all tasks by advanced users. Different user profiles are defined (principal
investigator, data curator, repository administrator), with different access rights. New data is ingested by copying raw
sequence files and the corresponding metadata file (a sample sheet) to the landing collection on iRODS. An iRODS
rule, triggered by the sample sheet file, extracts the metadata and registers it in the iCAT as AVUs (Attribute,
Value, Unit). Ontology files are registered in Virtuoso. The sequence files are copied to the persistent collection
and are made uniquely identifiable based on metadata. All the steps are recorded into a report file that enables
monitoring and tracking of progress and faults. Here we describe the design and implementation of the prototype,
and discuss the first assessment results. Initial results indicate that the proposed solution is acceptable and fits the
researchers' workflow well.
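The sample-sheet step of the ingestion workflow can be sketched in Python. This is a minimal illustration under assumptions, not the authors' implementation: the column names and the `sample_sheet_to_avus` helper are hypothetical, and a real deployment would register the resulting triples in the iCAT via an iRODS rule or client rather than return them.

```python
import csv
import io

def sample_sheet_to_avus(sheet_text, sample_id_field="Sample_ID"):
    """Parse a CSV sample sheet and emit (attribute, value, unit)
    triples per sample, ready to register in the iCAT as AVUs.
    The unit slot is left empty because sample sheets rarely carry one."""
    reader = csv.DictReader(io.StringIO(sheet_text))
    avus = {}
    for row in reader:
        sample = row[sample_id_field]
        avus[sample] = [
            (attr, value, "")
            for attr, value in row.items()
            if attr != sample_id_field and value
        ]
    return avus

# Hypothetical sample sheet with two sequencing runs.
sheet = """Sample_ID,Organism,Library_Prep,Run_Date
S001,Homo sapiens,TruSeq,2021-03-01
S002,Mus musculus,Nextera,2021-03-02
"""
avus = sample_sheet_to_avus(sheet)
```

Each triple would then be attached to the corresponding data object so the raw files become findable by their metadata.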
Integrated research data management in the Structural Sciences - Manjula Patel
A presentation given by Manjula Patel (UKOLN, University of Bath) at the I2S2 workshop "Scaling Up to Integrated Research Data Management", IDCC 2010, 6th December 2010, Chicago.
http://www.ukoln.ac.uk/projects/I2S2/events/IDCC-2010-ScalingUp-Wksp/
A presentation given by Manjula Patel (UKOLN) at the Repository Curation Environments (RECURSE) Workshop held at the 4th International Digital Curation Conference, Edinburgh, 1st December 2008,
http://www.dcc.ac.uk/events/dcc-2008/programme/
Being FAIR: Enabling Reproducible Data Science - Carole Goble
Talk presented at Early Detection of Cancer Conference, OHSU, Portland, Oregon USA, 2-4 Oct 2018, http://earlydetectionresearch.com/ in the Data Science session
Keynote presented to a KE workshop held in conjunction with the release of the report "A Surfboard for Riding the Wave:
Towards a four country action programme on research data": http://www.knowledge-exchange.info/Default.aspx?ID=469
Data sharing, archiving and discovery, Bill Michener - Alison Specht
A presentation by Bill Michener (University of New Mexico and DataONE) about data sharing, archiving and discovery. It was an introduction to a session co-hosted by FRB-CESAB and CEFE (CNRS) in Montpellier.
Metadata for digital long-term preservation - Michael Day
Presentation given at the Max Planck Gesellschaft eScience Seminar 2008: Aspects of long-term archiving, hosted by the Gesellschaft für Wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG), Göttingen, Germany, 19-20 June 2008
Being Reproducible: SSBSS Summer School 2017 - Carole Goble
Lecture 2:
Being Reproducible: Models, Research Objects and R* Brouhaha
Reproducibility is an R* minefield, depending on whether you are testing for robustness (rerun), defence (repeat), certification (replicate), comparison (reproduce) or transferring between researchers (reuse). Different forms of "R" make different demands on the completeness, depth and portability of research. Sharing is another minefield, raising concerns of credit and protection from sharp practices.
In practice the exchange, reuse and reproduction of scientific experiments is dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not “finished”: the codes fork, data is updated, algorithms are revised, workflows break, service updates are released. ResearchObject.org is an effort to systematically support more portable and reproducible research exchange.
In this talk I will explore these issues in more depth using the FAIRDOM Platform and its support for reproducible modelling. The talk will cover initiatives and technical issues, and raise social and cultural challenges.
Research Objects: more than the sum of the parts - Carole Goble
Workshop on Managing Digital Research Objects in an Expanding Science Ecosystem, 15 Nov 2017, Bethesda, USA
https://www.rd-alliance.org/managing-digital-research-objects-expanding-science-ecosystem
Research output is more than just the rhetorical narrative. The experimental methods, computational codes, data, algorithms, workflows, Standard Operating Procedures, samples and so on are the objects of research that enable reuse and reproduction of scientific experiments, and they too need to be examined and exchanged as research knowledge.
A first step is to think of Digital Research Objects as a broadening out to embrace these artefacts or assets of research. The next is to recognise that investigations use multiple, interlinked, evolving artefacts. Multiple datasets and multiple models support a study; each model is associated with datasets for construction, validation and prediction; an analytic pipeline has multiple codes and may be made up of nested sub-pipelines, and so on. Research Objects (http://researchobject.org/) is a framework by which the many, nested and contributed components of research can be packaged together in a systematic way, and their context, provenance and relationships richly described.
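One lightweight way to picture such packaging is a manifest that aggregates the parts of a study and records their relationships. The sketch below is purely illustrative: the field names loosely echo the aggregation-plus-annotations idea from researchobject.org but are hypothetical, not a normative schema.

```python
import json

# Illustrative manifest for a Research Object bundling a study's parts.
# Field names ("aggregates", "annotations", "wasValidatedWith") are
# hypothetical placeholders, not an official Research Object vocabulary.
manifest = {
    "id": "ro-example-001",
    "aggregates": [
        {"path": "data/measurements.csv", "type": "Dataset"},
        {"path": "models/kinetic_model.sbml", "type": "Model"},
        {"path": "workflows/analysis.cwl", "type": "Workflow"},
    ],
    "annotations": [
        # Typed links between components carry context and provenance.
        {"about": "models/kinetic_model.sbml",
         "relation": "wasValidatedWith",
         "target": "data/measurements.csv"},
    ],
}
serialized = json.dumps(manifest, indent=2)
roundtrip = json.loads(serialized)
```

The point of the structure is that the bundle, not any single file, is the unit of exchange: nested pipelines or additional models simply become further aggregated entries with their own relationship annotations.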
FAIRDOM - FAIR Asset management and sharing experiences in Systems and Synthe... - Carole Goble
Over the past 5 years we have seen a change in expectations for the management of all the outcomes of research – that is the “assets” of data, models, codes, SOPs and so forth. Don’t stop reading. Data management isn’t likely to win anyone a Nobel prize. But publications should be supported and accompanied by data, methods, procedures, etc. to assure reproducibility of results. Funding agencies expect data (and increasingly software) management, retention and access plans as part of the proposal process for projects to be funded. Journals are raising their expectations of the availability of data and codes for pre- and post-publication. The multi-component, multi-disciplinary nature of Systems Biology demands the interlinking and exchange of assets and the systematic recording
of metadata for their interpretation.
The FAIR Guiding Principles for scientific data management and stewardship (http://www.nature.com/articles/sdata201618) have been an effective rallying-cry for EU and USA Research Infrastructures. The FAIRDOM (Findable, Accessible, Interoperable, Reusable Data, Operations and Models) Initiative has 8 years of experience of asset sharing and data infrastructure, ranging across European programmes (SysMO and EraSysAPP ERANets), national initiatives (de.NBI, German Virtual Liver Network, UK SynBio centres) and PI's labs. It aims to support Systems and Synthetic Biology researchers with data and model management, with an emphasis on standards smuggled in by stealth and sensitivity to asset sharing and credit anxiety.
This talk will use the FAIRDOM Initiative to discuss the FAIR management of data, SOPs, and models for Sys Bio, highlighting the challenges of and approaches to sharing, credit, citation and asset infrastructures in practice. I'll also highlight recent experiments in affecting sharing using behavioural interventions.
http://www.fair-dom.org
http://www.fairdomhub.org
http://www.seek4science.org
Presented at COMBINE 2016, Newcastle, 19 September.
http://co.mbine.org/events/COMBINE_2016
Metadata and Semantics Research Conference, Manchester, UK 2015
Research Objects: why, what and how
In practice the exchange, reuse and reproduction of scientific experiments is hard, dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not “finished”: codes fork, data is updated, algorithms are revised, workflows break, service updates are released. Neither should they be viewed just as second-class artifacts tethered to publications, but as the focus of research outcomes in their own right: articles clustered around datasets, methods with citation profiles. Many funders and publishers have come to acknowledge this, moving to data sharing policies and provisioning e-infrastructure platforms. Many researchers recognise the importance of working with Research Objects. The term has become widespread. However: what is a Research Object? How do you mint one, exchange one, build a platform to support one, curate one? How do we introduce them in a lightweight way that platform developers can migrate to? What is the practical impact of a Research Object Commons on training, stewardship, scholarship, sharing? How do we address the scholarly and technological debt of making and maintaining Research Objects? Are there any examples?
I’ll present our practical experiences of the why, what and how of Research Objects.
Jump-Starting Data Standards I: Launching a Data Clean-Up Program - CollectiveImagination
Presented by Merrianne Timko, Museum of Fine Arts, Houston, Data Standards Manager
Although standardization of data is crucial to the overall functionality of TMS, convincing curators that there can be only “one” way to catalogue a similar group of works can be challenging. However, when thousands of works in TMS need to have data standardized within a short period of time and with minimal resources, launching a data clean-up program is essential. Such a program should include a proposed timeline, a delineation of tasks and responsibilities, user-friendly guidelines, and new approaches to the utilization of TMS fields for melding a variety of cataloguing preferences.
IGSN: The International Geo Sample Number (DFG Roundtable) - Kerstin Lehnert
This presentation provides an overview of the rationale for the IGSN, of the organizational structure and architecture of the IGSN e.V., and of the System for Earth Sample Registration.
Presentation about the IGSN and ongoing initiatives for the Internet of Samples at the EGU 2015 short course "Open Science Goes Geo: Beyond Data and Software".
Data You May Like: A Recommender System for Research Data Discovery - Anusuriya Devaraju
Various data portals have been developed to facilitate access to research datasets from different sources, for example the Data Publisher for Earth & Environmental Science (PANGAEA), the Registry of Research Data Repositories (re3data.org), and the National Geoscience Data Centre (NGDC). Due to data quantity and heterogeneity, finding relevant datasets on these portals can be difficult and tedious. Keyword searches based on specific metadata elements or multi-key indexes may return irrelevant results. Faceted searches may be unsatisfactory and time-consuming, especially when facet values are exhaustive. We need a more intelligent way to complement existing search mechanisms and enhance the user experience of the data portals.
We developed a recommender system that helps users find the most relevant research datasets on CSIRO’s Data Access Portal (DAP). The system is based on content-based filtering. We computed the similarity of datasets based on data attributes (e.g., descriptions, fields of research, location, contributors, and provenance) and inference from transaction logs (e.g., the relations among datasets and between queries and datasets). We improved the recommendation quality by assigning weights to data similarities. The weight values are drawn from a survey involving data users. The recommender results for a given dataset are accessible programmatically via a web service. Taking both data attributes and user actions into account, the recommender system will make it easier for researchers to find and reuse data offered through the data portal.
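The weighted content-based scoring described above can be sketched as follows. This is an assumption-laden illustration, not the DAP implementation: the attribute names, the Jaccard measure, and the `recommend` helper are hypothetical stand-ins, and the real system additionally mines transaction logs.

```python
def jaccard(a, b):
    """Set-overlap similarity in [0, 1]; 0 when both sets are empty."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def weighted_similarity(item_a, item_b, weights):
    """Combine per-attribute similarities using survey-derived weights."""
    total = sum(weights.values())
    return sum(
        w * jaccard(item_a.get(attr, set()), item_b.get(attr, set()))
        for attr, w in weights.items()
    ) / total

def recommend(target, catalogue, weights, top_n=3):
    """Rank catalogue datasets by weighted similarity to the target."""
    scored = [
        (name, weighted_similarity(target, attrs, weights))
        for name, attrs in catalogue.items()
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]

# Hypothetical metadata: keyword and field-of-research sets per dataset.
target = {"keywords": {"soil", "carbon"}, "fields": {"earth"}}
catalogue = {
    "A": {"keywords": {"soil", "carbon"}, "fields": {"earth"}},
    "B": {"keywords": {"ocean"}, "fields": {"earth"}},
}
weights = {"keywords": 2.0, "fields": 1.0}  # e.g. keywords matter more
results = recommend(target, catalogue, weights)
```

Raising a weight simply amplifies how much agreement on that attribute moves a dataset up the ranking, which is how survey feedback can be folded into the scoring.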
Irina Bastrakova, GV. Sample management and International Geo-Sample Number (IGSN) implementation at Geoscience Australia.
2 Nov 2016, Canberra. International Geo-Sample Number (IGSN) Symposium.
John Morrissey - CSIRO. CSIRO National Research Collections Australia (NRCA). Specimen identifiers - possible futures.
2 Nov 2016, Canberra. International Geo-Sample Number (IGSN) Symposium.
Best practice strategies to clean up and maintain your database with Hether G... - Blackbaud Pacific
In this webinar Hether Ghelf, Blackbaud Pacific’s Senior Consultant & Project Manager, discusses a best practice approach to database cleaning and continued maintenance.
Cleansing your data can have an immediate impact on your business by increasing retention and response rates, decreasing the volume of mail returned from post, and ensuring mail is reaching your organisation’s constituents.
View the recording here: https://www.blackbaud.com.au/notforprofit-events/webinars/past
Scott Edmunds: GigaScience - a journal or a database? Lessons learned from th... - GigaScience, BGI Hong Kong
Scott Edmunds talk at the HUPO congress in Geneva, September 6th 2011 on GigaScience - a journal or a database? Lessons learned from the Genomics Tsunami.
Next-Generation Search Engines for Information Retrieval - Waqas Tariq
In recent years, there have been significant advancements in scientific data management and retrieval techniques, particularly in standards and protocols for archiving data and metadata. Scientific data is generally rich, not easy to understand, and spread across different places. In order to integrate these pieces, a data archive and associated metadata should be generated. This data should be stored in a format that is locatable, retrievable and understandable; more importantly, it should be in a form that will continue to be accessible as technology changes, such as XML. New search technologies are being implemented around these protocols, which makes searching easy, fast and robust. One such system is Mercury, a metadata harvesting, data discovery, and access system built for researchers to search for, share and obtain spatiotemporal data used across a range of climate and ecological sciences.
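The harvesting idea above, flattening XML metadata records into searchable index fields, can be sketched with the standard library. The record layout is a toy, Dublin-Core-flavoured example of my own; it is not Mercury's actual schema, which harvests established metadata formats.

```python
import xml.etree.ElementTree as ET

# A toy metadata record; element names are illustrative only and do not
# reproduce any real harvesting schema used by Mercury.
record_xml = """<record>
  <title>Net primary productivity, 1982-1998</title>
  <coverage>
    <spatial>-90 -180 90 180</spatial>
    <temporal start="1982-01-01" end="1998-12-31"/>
  </coverage>
  <keyword>NPP</keyword>
  <keyword>carbon cycle</keyword>
</record>"""

def index_record(xml_text):
    """Flatten one XML record into the fields a search index might store."""
    root = ET.fromstring(xml_text)
    return {
        "title": root.findtext("title"),
        "keywords": [k.text for k in root.findall("keyword")],
        "start": root.find("coverage/temporal").get("start"),
    }

doc = index_record(record_xml)
```

Because the records are plain XML, they stay readable as software changes, and a harvester only needs this kind of flattening step to feed a full-text or spatiotemporal index.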
Keynote presentation at GlobusWorld 2021. Highlights product updates and roadmap, as well as user success stories in research data management. Presented by Ian Foster, Rachana Ananthakrishnan, Kyle Chard and Vas Vasiliadis.
ODIN Final Event - The Care and Feeding of Scientific Data - datacite
Mercè Crosas @mercecrosas
Director of Data Science, IQSS, Harvard University
Presentation delivered at the ODIN Final Event in Amsterdam (Netherlands) on Wednesday, September 24, 2014: ORCID and DataCite: Towards Holistic Open Research.
More info: www.odin-project.eu
FAIR Data Management and FAIR Data Sharing - Merce Crosas
Presentation at the Critical Perspectives on the Practice of Digital Archaeology symposium: http://archaeology.harvard.edu/critical-perspectives-practice-digital-archaeology
The Human Cell Atlas Data Coordination Platform - Laura Clarke
This presentation gives a brief summary of the Human Cell Atlas project and describes the data coordination platform which is being built to support it.
Lecture for a course at NTNU, 27th January 2021
CC-BY 4.0 Dag Endresen https://orcid.org/0000-0002-2352-5497
See also http://bit.ly/biodiversityinformatics
https://www.gbif.no/events/2021/lecture-ntnu-gbif.html
FAIR Data, Operations and Model management for Systems Biology and Systems Me... - Carole Goble
FAIR Data, Operations and Model management for Systems Biology and Systems Medicine Projects, given at the 1st Conference of the European Association of Systems Medicine, 26-28 October 2016, Berlin. The FAIRDOM project is described.
Findable, Accessible, Interoperable, Reusable < data | models | SOPs | samples | articles | * >. FAIR is a mantra; a meme; a myth; a mystery; a moan. For the past 15 years I have been working on FAIR in a range of Life Science projects and initiatives. Some are top-down, like the Life Science European Research Infrastructures ELIXIR and ISBE, and some are bottom-up, supporting research projects in Systems and Synthetic Biology (FAIRDOM), Biodiversity (BioVeL), and Pharmacology (Open PHACTS), for example. Some have become movements, like Bioschemas, the Common Workflow Language and Research Objects. Others focus on cross-cutting approaches in reproducibility, computational workflows, metadata representation and scholarly sharing & publication. In this talk I will relate a series of FAIRy tales. Some of them are Grimm. Some have happy endings. Who are the villains and who are the heroes? What are the morals we can draw from these stories?
Presentation slides from a lecture given at the University of the West of England (UWE) as part of the MSc in Library and Library Management, University of the West of England, Frenchay Campus, Bristol, March 24, 2009
This slide deck provides an update on the development of the Astromaterials Data System, a project funded by NASA to ensure the long-term accessibility and utility of lab analytical data acquired on astromaterials samples curated at the Johnson Space Center, including samples collected on the moon during the Apollo missions and meteorites collected in Antarctica.
Presentation about geochemical research data access and publication provided to the Australian Geochemistry Network by Kerstin Lehnert of EarthChem and the Astromaterials Data System
Boosting Data Science in Geochemistry: We Need Global Geochemical Data Standa...Kerstin Lehnert
Presentation at AGU Fall Meeting 2018: Large-scale, global geochemical data syntheses like EarthChem and GEOROC have, for nearly two decades, inspired and made possible a vast range of scientific studies and new discoveries, facilitating the analysis and mining of geochemical data and creating new paradigms in geochemical data analysis such as statistical geochemistry. These syntheses provide easy access to fully integrated compilations of thousands of datasets (‘data fusion’) with millions of geochemical measurements that are accompanied by comprehensive and harmonized metadata for context and provenance to search, filter, sort, and evaluate the data.
The syntheses have been assembled and maintained through manual labor by data managers, who extract data and metadata from text, tables, and supplements of publications for inclusion in the databases, a time-consuming task due to the multitude of data formats, units, normalizations, vocabularies, etc., i.e. lack of best practices for geochemical data reporting. In order to support and advance future science endeavors that rely on access to and analysis of large volumes of geochemical data, we need to develop and implement global standards for geochemical data that not only make geochemical data FAIR (Findable, Accessible, Interoperable, Re-usable), but ready for data fusion. As more geochemical data systems are emerging at national, programmatic, and subdomain levels in response to Open Access policies and science needs, standard protocols for exchanging geochemical data among these systems will need to be developed, implemented, and governed.
Critical is the alignment with existing standards such as the Semantic Sensor Network (SSN) ontology, a recent joint W3C and OGC standard that standardizes description of sensors, observation, sampling, and actuation, with sufficient flexibility to allow details of these elements to be defined in different domains. New initiatives within the International Council for Science and CODATA are working towards coordinating the International Science Unions to identify and endorse the more authoritative standards (including vocabularies and ontologies). These initiatives present a timely opportunity for geochemical data to ensure that they are born ‘connected’ within and across disciplines.
Looking at the past of infrastructure development for research data in the context of infrastructure development patterns and experiences from the evolution of the IEDA data facility to inform future pathways and developments. A major focus of the lecture is on the FAIR principles and the issues surrounding reusability of data.
Presentation that describes the experiences and insights of the IEDA data facility gained during the >10 years of building cyberinfrastructure for a long-tail community geochemistry
Advancing Reproducible Science from Physical Samples: The IGSN and the iSampl...Kerstin Lehnert
Presentation at the Geological Society of America (GSA) meeting 2016 in the session on FOSSIL SPECIMENS 0'S AND 1'S: DATABASES, STANDARDS, & MOBILIZATION
Making Small Data BIG (UT Austin, March 2016)Kerstin Lehnert
Presentation given at the Texas Advanced Computing Center. It describes the potential of re-using small data for new science, achievements and the challenges to make small data re-usable.
Research Data Infrastructure for Geochemistry (DFG Roundtable)Kerstin Lehnert
This presentation provides an overview of different aspects of data management for geochemistry and resources available at the EarthChem@IEDA data facility.
Interdisciplinary Data Resources for Volcanology at the IEDA (Interdisciplina...Kerstin Lehnert
Presentation given at the EGU 2015 General Assembly in session "Methods for Understanding Volcanic Hazards and Risks" (NH2.2), describing EarthChem data systems that make accessible and synthesize geochemical data of volcanic rocks and gases, and the System for Earth Sample Registration that catalogs sample metadata and provides persistent unique sample identifiers (International Geo Sample Number IGSN). It also mentions EarthChem's plans and ongoing work to link geochemical data with other volcanological databases, and the IEDA data rescue initiative.
Digital Representation of Physical Samples in Scientific PublicationsKerstin Lehnert
Presentation about the digital representation of physical samples in scientific publications, given at the European Geoscience Union meeting 2015 in the Splinter Meeting 1.36 "Digital Representation of Physical Samples in Scientific Publications".
Lehnert: Making Small Data Big, IACS, April2015Kerstin Lehnert
Seminar presentation at the Institute for Advanced Computational Science at Stony Brook University, April 9, 2015, describing achievements and challenges of data infrastructure in a long-tail science domain with the example of geochemistry.
iSamples Research Coordination Network (C4P Webinar)Kerstin Lehnert
The iSamples (Internet of Samples in the Earth Sciences) Research Coordination Network is part of EarthCube and focuses on the integration of physical samples and collections into digital data infrastructure in the Earth sciences. This presentation summarizes the activities of the iSamples RCN and presents results from a major community survey about sharing and management of physical samples that was conducted as part of the RCN.
MoonDB: Restoration & Synthesis of Planetary Geochemical DataKerstin Lehnert
This presentation explains the MoonDB project that will restore and synthesize geochemical and petrological data acquired on lunar samples over more than 4 decades. The project is a collaboration between the IEDA data facility (http://www.iedadata.org) at the Lamont-Doherty Earth Observatory of Columbia University and the Astromaterials Acquisition and Curation Office (AACO) at Johnson Space Center (JSC).
This presentation was part of a workshop of IEDA (http://www.iedadata.org) at the AGU (American Geophysical Union) Fall Meeting 2013 in San Francisco that was intended as an introduction to the topic of data publication.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...Wasswaderrick3
In this book, we use conservation-of-energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity, and from it the Poiseuille flow equation, the transition flow equation and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our energy-conservation techniques to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol...Studia Poinsotiana
I Introduction
II Subalternation and Theology
III Theology and Dogmatic Declarations
IV The Mixed Principles of Theology
V Virtual Revelation: The Unity of Theology
VI Theology as a Natural Science
VII Theology’s Certitude
VIII Conclusion
Notes
Bibliography
All the contents are fully attributable to the author, Doctor Victor Salas. Should you wish to get this text republished, get in touch with the author or the editorial committee of the Studia Poinsotiana. Insofar as possible, we will be happy to broker your contact.
What are greenhouse gases and how many gases affect the Earth.moosaasad1975
What are greenhouse gases, how do they affect the Earth and its environment, what is the future of the environment and the Earth, and how do they affect the weather and the climate?
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects of interest related to mental processes. Key to this approach is the employment of fast, multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their capacity to elicit complex behavior composed of discrete entities. I will also discuss how deep behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Seminar of U.V. Spectroscopy by SAMIR PANDASAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that measures the amount of light absorbed by the analyte.
Richard's aventures in two entangled wonderlandsRichard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Phenomics assisted breeding in crop improvementIshaGoswami9
The population is increasing and will reach about 9 billion by 2050; due to climate change, it is difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics of multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4−0.9 µm) and novel JWST images with 14 filters spanning 0.8−5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3−31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5−15. These objects show compact half-light radii of R_1/2 ∼ 50−200 pc, stellar masses of M⋆ ∼ 10^7−10^8 M⊙, and star-formation rates of SFR ∼ 0.1−1 M⊙ yr⁻¹. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward-modeling approach to infer the properties of the evolving luminosity function, without binning in redshift or luminosity, that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making.
They monitor common gases, weather parameters, and particulates.
Unveiling the Energy Potential of Marshmallow Deposits.pdf
Data Standards & Best Practices for the Stratigraphic Record
1. Data Standards & Best Practices
Kerstin Lehnert
Lamont-Doherty Earth Observatory
iedadata.org
2. Vouchering the Stratigraphic Record
A synthesis database?
Aggregates data that are published in articles or in data repositories
Requirements: Integration, Quality (Trusted data!)
Needs standardized metadata, semantics, and persistent unique identifiers
A trusted repository?
Publishes and ensures persistent access to data
Requirements: Compliance with international data curation and repository standards
Long-term preservation, data identification (DOI), editorial procedures, etc.
3. Data Standards
“documented agreements on representation, format, definition, structuring, tagging, transmission, manipulation, use, and management of data.”
Discipline specific
Data type specific
Application specific
4. Data Standards: Why?
Re-usability of data
Reproducibility of science
Integration/interoperability of data
5.
6. Reproducibility in the Field Sciences
Workshop in May 2015, organized by AAAS (M. McNutt), AGU, and ESA, funded by the Arnold Foundation
Report in preparation
Technical Requirements for Transparent, Reproducible Data
1. The data themselves must be publicly available in machine-readable, non-proprietary formats with accurate and precise descriptive metadata;
2. Data provenance—process(es) by which usable datasets were generated or derived from raw, often streaming or machine-readable-only data—must be accurately and precisely specified;
3. Computer code (“scripts”) and software with which datasets were analyzed must be available and adequately described to ensure their repeated use, and be publicly available in non-proprietary formats; and
4. Version control should be used to ensure that the original data and code are maintained.
(from draft workshop report)
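A minimal sketch of what requirements 1 and 2 might look like in practice: a dataset written in a non-proprietary, machine-readable format alongside a small metadata record that documents derivation and fingerprints the file. The file names, field names, and values are illustrative, not any standard.

```python
import csv
import hashlib
import json
from datetime import date

# Illustrative measurements (hypothetical values)
rows = [
    {"sample_id": "M-1", "depth_m": 12.5, "SiO2_wt_pct": 49.3},
    {"sample_id": "M-2", "depth_m": 14.0, "SiO2_wt_pct": 51.1},
]

# Requirement 1: data in a machine-readable, non-proprietary format (CSV)
with open("measurements.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)

# Requirement 2: provenance -- record how the file was derived
# and fingerprint it so later copies can be verified
with open("measurements.csv", "rb") as f:
    checksum = hashlib.sha256(f.read()).hexdigest()

metadata = {
    "title": "Example stratigraphic geochemistry table (illustrative)",
    "created": date.today().isoformat(),
    "derived_from": "raw instrument export (not shown)",
    "format": "text/csv",
    "sha256": checksum,
}
with open("measurements.metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Version control (requirement 4) would then track both files together, so data and description never drift apart.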
7. Coalition for Publishing Data in the Earth & Space Sciences (COPDESS)
Joint initiative of Earth Science publishers and Data Facilities to better help translate the aspirations of open, available, and useful data from policy into practice.
Reaffirm and ensure adherence to existing journal and publishing policies and society position statements regarding open data sharing and archiving of data, tools, and models.
Ensure that Earth science data will, to the greatest extent possible, be stored in community-approved repositories that can provide additional data services.
Statement of Commitment signed by all major Earth & Space Science publishers
www.copdess.org
8.
9. Repository Standards
Open access
Data quality assurance (editorial process)
Persistence (long-term preservation)
Persistent & unique identification of data (DOI registration)
Standard-based metadata (ISO) & APIs (OAI-PMH)
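OAI-PMH, the harvesting API mentioned above, is a simple HTTP protocol: every request is a base URL plus a `verb` and a few arguments, and every repository must support Dublin Core (`oai_dc`) metadata. A sketch of building such a request (the endpoint below is hypothetical; substitute a real repository's OAI-PMH base URL):

```python
from urllib.parse import urlencode

def oai_request_url(base_url: str, verb: str, **params: str) -> str:
    """Build an OAI-PMH request URL; verbs and arguments follow OAI-PMH 2.0."""
    return base_url + "?" + urlencode({"verb": verb, **params})

# Hypothetical repository endpoint -- not a real service
BASE = "https://repository.example.org/oai"

# ListRecords with Dublin Core, the metadata format every
# OAI-PMH repository is required to support
url = oai_request_url(BASE, "ListRecords", metadataPrefix="oai_dc")
print(url)
```

Fetching that URL returns XML that a harvester can parse (e.g. with `xml.etree.ElementTree`), paging through results via the protocol's `resumptionToken` argument.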
11. Distributed Data Curation
Alert: Stratigraphy is multi-disciplinary
There are many data types that already have homes
Paleobio Database
Macrostrat/Digital Crust
Geochron (@IEDA)
MagIC
Open Core Data (@IEDA – under development)
EarthChem (@IEDA)
System for Earth Sample Registration (@IEDA)
Don’t reinvent, but leverage, link, & integrate!
13. EarthCube: A Process
Get all the info at: http://earthcube.org
(diagram labels: Computer Sciences, Software Engineers, Scientific Vision, Technical Architecture, Engagement, Funded Projects)
14. Back to Data Standards
Metadata
Content
Structure (data model)
Vocabularies & Taxonomies
Identifiers
(API = Application Programming Interface)
16. Open Geospatial Consortium (OGC): Observations & Measurements
Sampling Observation
“Observations commonly involve sampling of an ultimate feature of interest. This International Standard defines a common set of sampling feature types classified primarily by topological dimension, as well as samples for ex-situ observations.”
(OGC O&M 2.0.0 / ISO 19156; editor: Simon Cox)
e.g. Station, Transect, Section
17. Observation Data Model v2
Kerstin Lehnert: "Making small data BIG: Insights from a Long-tail Geoscience Domain"
ODM2 Team: J S Horsburgh, A K Aufdenkampe, L Hsu, A Jones, K Lehnert, E Mayorga, L Song, D Tarboton, I Zaslavsky
22. Internet of Samples in the Earth Sciences
Physical samples need to be linked to the digital data generated by their study.
Reproducibility! Access to the physical samples is required to verify & reproduce observations.
Re-usability! Access to information about samples is required for proper evaluation & interpretation of sample-based data.
Physical samples need to be shared broadly for use & re-use.
Samples are often expensive to collect (drilling, remote locations).
Many samples are unique and irreplaceable.
Re-analysis augments the utility of existing data.
Samples often serve in ways that the collectors and repositories could not have imagined.
23. Unique Sample Identification
Imagine the possibilities …
Easily find a specific sample and contact its owner
Find all publications that mention a specific sample
Find all data for that sample across the literature and distributed databases
Find other samples with similar properties: geospatial, temporal, compositional
24. Sample Identification Until Now
Samples have ambiguous and non-persistent names and cannot be properly cited.
The EarthChem Portal shows 75 publications with geochemical data referenced to a sample with the name M1 (or M-1). (www.earthchem.org)
Names of dredge sample 3 of the Amphitrite cruise (PetDB database, www.petdb.org)
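The ambiguity described above is easy to demonstrate: once informal names like "M1", "M-1", and "m 1" are normalized for matching, unrelated samples collide on the same key. The cruise labels below are hypothetical; the point is the collision itself, which is what a persistent identifier removes.

```python
# Three unrelated samples whose names all normalize to "M1" -- name-based
# matching conflates them. Cruise labels are hypothetical.
samples = [
    {"name": "M1",  "cruise": "CRUISE-A"},
    {"name": "M-1", "cruise": "CRUISE-B"},
    {"name": "m 1", "cruise": "CRUISE-C"},
]

def normalize(name: str) -> str:
    """Uppercase and strip everything but letters and digits."""
    return "".join(ch for ch in name.upper() if ch.isalnum())

hits = [s for s in samples if normalize(s["name"]) == "M1"]
print(len(hits))  # all three distinct samples collide under one key
```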
25. Sample Identification From Now: IGSN, the International Geo Sample Number
Persistent unique identifier for physical objects in the Earth Sciences
Global uniqueness guaranteed via governance by the IGSN e.V.
Persistent access and preservation of sample metadata
Cataloguing services of IGSN e.V. members
Allows building a central search engine
Resolving service of the IGSN central registry
Does not replace personal or institutional naming protocols
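The resolving service mentioned above works like a DOI resolver: the central registry redirects igsn.org/&lt;IGSN&gt; to the sample's landing page at the registering agent. A small sketch, where the syntax check is a loose plausibility test (not the official IGSN grammar) and the example IGSN is hypothetical:

```python
import re

def igsn_landing_url(igsn: str) -> str:
    """Return the central-resolver URL for an IGSN.

    The central registry redirects igsn.org/<IGSN> to the sample's
    landing page. The regex below is a loose plausibility check,
    not the official IGSN syntax.
    """
    igsn = igsn.strip().upper()
    if not re.fullmatch(r"[A-Z0-9.-]{5,}", igsn):
        raise ValueError(f"not a plausible IGSN: {igsn!r}")
    return f"http://igsn.org/{igsn}"

print(igsn_landing_url("abc123xyz"))  # hypothetical IGSN
```

Because the URL is stable, it can be cited in publications and metadata records exactly like a DOI for a dataset.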
27. IGSN Status
International governance established in 2011
14 member organizations in the IGSN e.V. (www.igsn.org)
ca. 4 million samples registered (registration tripled in 2014)
>350 active users, including:
an increasing number of individual scientists
sample repositories & museums (Smithsonian, marine cores)
geological surveys (USGS, Geoscience Australia, BGR)
large-scale observatories and sampling campaigns (ICDP, IODP, CZO, DCO, GeoPRISMs, etc.)
34. Extended IGSN Metadata
Images
Documents (.pdf, .xls, .doc)
References
URLs for related data resources
User-defined metadata
35. Advance use of innovative CI to connect physical samples across the Earth Sciences with digital data infrastructure
Goals:
Improve discovery, access, and re-usability of physical samples
Improve re-usability and reproducibility of the data generated by their study
(diagram labels: Registries & Catalogs, Metadata, Identifiers, Citation, Repositories, Software Tools, Taxonomies)
36. C4P: Collaboration & Cyberinfrastructure for Paleoscience
An EarthCube Research Coordination Network
Unravel the large-scale, long-term evolution of the Earth-Life System through the study of the geological record
Major challenges C4P addresses:
• Heterogeneous & dispersed data
• Modeling of age & time
• Legacy & ‘dark’ data
• Limited interoperability among resources
• Variable semantics & ontologies
A diverse community: paleobiology, paleoclimate, paleoceanography, geochemistry, dendrochronology, stratigraphy, geochronology, sample curation, data management, bioinformatics, semantics, software architecture, and more ...
C4P achievements:
• New resources: data & software catalogs
• Educational materials (webinars)
• New collaborations
• Convergence on best practices (samples, age, taxonomy)
37. Take Away Messages
develop leading practices for data
get community buy-in
align & coordinate with existing leading practices
leverage existing infrastructure
get started and don’t let the challenges stop you
38. The Cultural Challenges
“Cyberinfrastructure for Solid Earth Geochemistry” (Kerstin Lehnert, GSA 2003)
“The Hitchhiker’s Guide to Geoinformatics” (Lee Allison, LISTMG Workshop 2004)
“Building an International Collaboration for Geoinformatics” (Walter Snyder, AGU 2005)
39. Thank You!
“The wonderful thing about standards is that there are so many of them to choose from.” (Grace Hopper)