Legal and regulatory challenges to data sharing for clinical genetics and ge... - Human Variome Project
There are many factors that impede genomic variant sharing in the UK, despite it having become a necessary part of clinical care. These include the lack of a designated infrastructure or mechanism, aggravated by the complexity of the laws that apply and by fragmented, variable advice from local ‘Caldicott guardians’, who guide NHS trusts on their responsibilities concerning data protection and confidentiality. Since the legitimacy of data sharing in the UK is framed in terms of ‘personal data’ being shared for ‘direct care’ (subject to legal exceptions), the blurred boundary between clinical care and research and the spectrum of identifiability of data also lead to differing interpretations, resulting in inconsistent practices.
In a multidisciplinary collaboration, the PHG Foundation and the UK’s Association for Clinical Genetic Science co-hosted a workshop to examine the clinical necessity for sharing variant data and associated phenotypic information, the technical feasibility of such sharing, and its legal and regulatory impediments. Delegates included clinicians, laboratory scientists and key policy makers, including the National Data Guardian for Health and Care and representatives from the 100,000 Genomes Project, a pioneering research project which promises to build a legacy for future genomics services in the UK. The key finding from our work was that current arrangements for sharing genomic variants within the NHS are unsatisfactory, and that inconsistent practices are compromising safety and quality. Our workshop report [1] highlights the urgent need for (i) national agreement to optimise sharing within the NHS and to develop consensus on the legitimacy of data sharing, (ii) standardised operational processes, including a designated sustainable database or mechanism for sharing, and (iii) strong leadership by the multiple relevant health organisations to demonstrate the benefits and risks associated with sharing and not sharing data.
Since publication of the workshop report, the NHS Consortium (operating within the DECIPHER database) has reported a 120% increase in the number of cases shared, the 100,000 Genomes Project and its associated data embassy have got underway, and the EU Data Protection Regulation has been finalised. However, research highlights continuing public reservations about some aspects of data sharing, including commercial access and misgivings about secondary uses of data. Publication of the National Data Guardian’s long-awaited review of consent and security provisions, which will provide guidance on a new consent and opt-out model for sharing patient information in the NHS, has been delayed until the results of the EU referendum are known. Against this backdrop, the imperative to develop robust, proportionate policies for genomic data sharing becomes increasingly acute.
Funding from the PHG Foundation and the Association for Clinical Genetic Science.
Open Science Commons: a holistic and ecological view of science - OpenAIRE
OpenAIRE presentation at IFLA 2019 annual conference.
Open science comes on the heels of the fourth paradigm of science, which is based on data-intensive scientific discovery, and represents a new paradigm shift, affecting the entire research lifecycle and all aspects of science: execution, collaboration, communication and innovation. From supporting and using (big) data infrastructures for data archiving and analysis, to continuously sharing all types of research results with peers at any stage of the research endeavor, to communicating them to broad public or commercial audiences, openness moves science away from being the exclusive concern of researchers and research-performing organisations and brings it to center stage of our connected society, requiring the engagement of a much wider range of stakeholders: digital and research infrastructures, policy decision makers, funders, industry, and the public itself.
This presentation focuses on two of Europe’s flagship initiatives for Open Science, the European Open Science Cloud and OpenAIRE (www.openaire.eu), and discusses the role of libraries in the wider data ecosystem as (i) an enabler of openness, FAIRness, participation, transparency and social impact, active in the preservation, curation, publication and dissemination of digital scientific materials, and (ii) a multiplier for training and supporting scientists and non-scientists alike (citizen science, open innovation) for harmonious co-existence in this emerging environment.
Samantha Robertson - NHMRC Perspectives on Increasing Access to Data from Pub... - Wiley
Governments and industries all over the world are tackling the challenges and opportunities of ‘Big Data’. In view of these challenges, the key drivers of change in this area are the behaviour of researchers, the introduction of incentives or rewards, and funding for data sharing infrastructure. Governments and taxpayers also expect a return on investment from the money spent on publicly funded research. Building on and learning from the successes (and failures) of others needs to be part of the research vernacular. Issues such as open access, data curation, data handling and data sharing are all matters in which the National Health and Medical Research Council (NHMRC) has an interest. NHMRC works with the sector to develop best-practice policies on such matters.
Samantha Robertson
Executive Director, NHMRC Evidence, Advice & Governance
Presented at the 2015 Wiley Publishing Seminar, 5 November, Melbourne, Australia.
2015 09-10 Health Valley meets Topsector LSH - Alain van Gool
Outline of the Radboud way towards Personalized Health(care) in a great session between Health Valley, Topsector LSH, Radboudumc, the province of Gelderland and others.
2017 03-07 World Economic Forum - Dutch topsector Life Science Health, The Ha... - Alain van Gool
Update to the World Economic Forum on the Dutch approach towards personalized medicine and health, leveraging the strong infrastructure components we have in the Netherlands and addressing the needs of society to maintain or restore personal health.
Presented by Claudia Stein, Director, Division of Information, Evidence, Research and Innovation, WHO/Europe, at the 64th session of the WHO Regional Committee for Europe.
EU Clinical Trials Regulation - IPPOSI perspective - IPPOSI
IPPOSI CEO, Dr Derick Mitchell delivered a presentation on the EU Clinical Trials Regulation from the patients' perspective at the 20th International Conference on Pharmaceutical Medicine, Athens, Greece.
Poster presentation stigma index Poland 2009 - GNP+
A summary of the Stigma Index report of Poland by GNP+
For more information on The People Living with HIV Stigma Index and Poland, visit the GNP+ PLHIV Stigma Index website: http://www.stigmaindex.org/
You can download the poster presentation from http://www.gnpplus.net
EUPATI 2013 Conference: Patient involvement in medicines R&D: Bringing to li... - EUPATI
"Patient involvement in medicines R&D: Bringing to life with EUPATI", presented by Jan Geissler, EUPATI Director, at the EUPATI 2013 Conference on 19 April 2013.
Guest lecture Programme in the Methods of Health Economics (Abteilung für Ges... - healthdata.be
Guest lecture, Programme in the Methods of Health Economics (Abteilung für Gesundheitsökonomie, Zentrum für Public Health an der Medizinischen Universität Wien).
ClinVar: Aggregating Data to Improve Variant Interpretation - Melissa Landrum - Human Variome Project
The rate of variant discovery continues to surpass the rate of clinical-grade interpretation. This is a challenge for precision medicine, because fast, reliable access to variant interpretations is necessary to provide well-informed and timely interpretations of test results to patients. ClinVar is a public repository for interpretations of clinical significance and functional effects of variants in any gene and for any disease. Interpretations are submitted by many sources, including clinical testing laboratories, research laboratories, locus-specific databases, expert panels and practice guidelines, as well as OMIM® and GeneReviews™. Collecting variant interpretations in ClinVar depends on integrating data from these different sources, which has several benefits. First, data integration requires standardizing the data from each source. This improves the quality of the data in ClinVar as well as in each of the individual datasets. ClinVar staff validate HGVS expressions as a routine part of ClinVar submission processing. Submitters are encouraged to use standard terms in MedGen for diseases and phenotypes. Standard terms for clinical significance are used in ClinVar when available; for example, ClinVar uses the terms recommended by ACMG to classify variants for Mendelian diseases. Secondly, ClinVar aggregates all data for a variant defined by its genomic location. Therefore, HGVS descriptions on different transcripts or on different genomic sequences can be recognized as the same variant. Thirdly, integrating data from multiple submitters allows the evidence from all sources to be pooled together. This larger collection of evidence aids the re-evaluation of variant classifications, and is especially valuable for rare variants and novel gene-disease relationships. Fourthly, data integration means that variant interpretations from different sources can be viewed together and compared.
Thus a ClinVar user has access to interpretations from outside any internal system and knows whether or not there is consensus in the interpretation. Submitting laboratories use reports of conflicting interpretations in ClinVar to prioritize variants that they should re-evaluate. ClinVar receives data from many data providers, and therefore provides clear attribution to each contributing group, including links to records in LSDBs. Each source may update their submission to ClinVar at any time. For example, a record may be updated when a variant is re-classified or when additional evidence is available to support the interpretation. Submitters may consider providing regular updates to ClinVar to prevent their interpretations from becoming out of date. Submissions to ClinVar describe variants that range in complexity from simple alleles with explicit sequence locations to copy number changes and cytogenetic rearrangements with fuzzy boundaries.
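The aggregation-and-conflict idea described above can be sketched in a few lines. This is our own minimal illustration, not ClinVar's actual implementation: submissions are keyed by a normalized genomic location (so different HGVS descriptions of the same variant pool together), and any variant carrying more than one distinct classification is flagged for re-evaluation. The data structures and field names are hypothetical.

```python
# Illustrative sketch (not ClinVar's implementation): pool variant
# interpretations from multiple submitters by genomic location and
# flag variants whose classifications conflict.
from collections import defaultdict

def aggregate_interpretations(submissions):
    """submissions: list of (genomic_key, submitter, classification),
    where genomic_key is a normalized (chrom, pos, ref, alt) tuple."""
    by_variant = defaultdict(list)
    for key, submitter, classification in submissions:
        by_variant[key].append((submitter, classification))

    report = {}
    for key, records in by_variant.items():
        classes = {c for _, c in records}
        report[key] = {
            "submitters": [s for s, _ in records],
            "classifications": sorted(classes),
            # More than one distinct classification means the variant is a
            # candidate for re-evaluation by the submitting laboratories.
            "conflicting": len(classes) > 1,
        }
    return report

# Hypothetical example submissions (coordinates are placeholders).
subs = [
    (("17", 43071077, "T", "C"), "LabA", "Pathogenic"),
    (("17", 43071077, "T", "C"), "LabB", "Uncertain significance"),
    (("13", 32340301, "G", "A"), "LabA", "Benign"),
]
result = aggregate_interpretations(subs)
```

Keying on genomic location rather than on the HGVS string is the step that lets transcript-level and genome-level descriptions of the same allele land in one record.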
Establishing validity, reproducibility, and utility of highly scalable geneti... - Human Variome Project
Background: New technologies and increased competition have improved, and will continue to improve, the cost-effectiveness of genetic testing, making genetic analysis more accessible to medical practices worldwide. However, challenges remain in establishing the validity of such tests. Moreover, many patients harbor rare or novel variants, and classification is likely to remain a bottleneck in the broader deployment of genetic medicine.
The PhenX Toolkit: Standard Measures for Collaborative Research - Wayne Huggins - Human Variome Project
Introduction and Background: The Web-based PhenX Toolkit (consensus measures for Phenotypes and eXposures, https://www.phenxtoolkit.org/) is a catalog of standard measures designed to facilitate collaborative biomedical research. PhenX measures help ensure that phenotypes from different studies are collected and represented in a consistent format. This consistency can enable data comparability across sites in large cohorts (e.g., the Precision Medicine Initiative) and facilitates combining data to validate clinically actionable variants, increase statistical power (e.g., in studies of rare genetic conditions or gene-environment interactions), or compare treatments and outcomes between patients.
Human Variome Project quality assessment criteria for variation databases - M... - Human Variome Project
Numerous databases containing information about DNA, RNA and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease or diseases. These databases are widely considered the most reliable information source for a particular gene/protein/disease, but it should also be made clear that they may have widely varying contents, infrastructure and quality. Quality is important to evaluate because these databases may affect health decision-making, research and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database [1]. The resulting HVP quality evaluation criteria are divided into four main components: data quality, technical quality, accessibility, and timeliness. Instructions are available for the developed quality criteria and for how implementation of the quality scheme can be achieved ([1], http://www.humanvariomeproject.org/finish/19/255.html). Examples are provided of the current status of the quality items in two different databases: BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance.
Reference: [1] Vihinen, M., Hancock, J. M., Maglott, D. R., Landrum, M. J., Schaafsma, C. P., Taschner, P. Human Variome Project quality assessment scheme for variation databases. Hum. Mutat. (in press).
Mitochondrial diseases are characterized by high clinical and genetic heterogeneity, and a growing number of mitochondrial disease genes have been identified. Mitochondrial diseases can follow any mode of inheritance, owing to the twofold genetic origin of respiratory chain (RC) components (nuclear DNA and mitochondrial DNA). 1,000 to 1,500 nuclear genes encode mitochondrial proteins. Approximately 250 of these genes have been reported as disease-causing. These genes encode not only the various subunits of each respiratory chain complex, but also the ancillary proteins involved in the different stages of holoenzyme biogenesis, transcription, translation, chaperoning, addition of prosthetic groups and assembly of proteins, as well as the various enzymes involved in mtDNA maintenance. Some of these genes are associated with well-defined syndromes, but more and more are specific to a single patient or family, hampering the establishment of genotype-phenotype correlations. The clinical heterogeneity of these disorders makes diagnosis difficult, especially in the first years of the clinical course, and other genetic diseases can present an overlapping phenotype. Therefore, only the identification of the disease-causing mutation allows the diagnosis of mitochondrial disease to be established with certainty.
Dr. Rötig (PhD) is the head of the group working on mitochondrial diseases at Necker Hospital (Paris). This group initially established an integrated platform of clinical, biochemical and molecular analysis to investigate patients with OXPHOS disease. The group's scientific focus is the identification of genes involved in mitochondrial disorders and the investigation of their pathophysiology. They described the first non-neuromuscular presentation of mitochondrial diseases and characterized the very first mutations in nuclear genes resulting in defects of the Krebs cycle or the respiratory chain.
Use of open, curated variant databases: ethics? Liability? - Bartha Knoppers - Human Variome Project
Translation of genomics into medicine and drug development requires comprehensive, high-quality genomic variant databases. To support translation, there is a movement towards sharing clinical annotations of variants (e.g., benign, unknown, pathogenic) internationally via open access. Despite the growing popularity of variant databases, ethical issues and liability risks have received scant attention. Ethical priorities for variant databases include 1) competence – ensuring that data is responsibly managed, curated, and used; 2) confidentiality – ensuring appropriate safeguards for patient data; 3) communication – clearly describing the purpose, quality standards, and data handling practices to contributing patients and potential users; and 4) continuous oversight to adapt database governance in a rapidly evolving environment. How can database managers fulfill these obligations when these responsibilities are increasingly distributed along the clinical pipeline? Legal issues include medical liability based on potential harm to patients; liability based on third-party intellectual property or privacy rights in the data; and regulatory risks as variant data is integrated into genetic tests or devices. Can these risks be managed through appropriate governance structures – including adequate consents, access processes, contributor agreements, and disclaimers – while still facilitating sharing and clinical use?
Checking the experts: compliance with author instructions regarding HGVS nome... - Human Variome Project
We have investigated compliance with guidelines regarding HGVS nomenclature and/or submission of variants to public databases in the author instructions of several genetics and genomics journals. For this, we used a list of genetics and genomics journals created by the Human Variome Project (HVP) office (see http://www.humanvariomeproject.org/resources/genetics-and-genomics-journals.html). The HVP aims to work towards open and free sharing of high-quality information on genetic variation and its effect on human health. The HVP Gene and Disease-specific Database Council has created example instructions for authors of genetics and genomics journals containing guidelines regarding HGVS nomenclature and submission of variants to public databases. The HVP office has sent this document to journal editors asking them to include similar requirements in their instructions for authors. The rationale is that better instructions will help improve the quality of variant descriptions in manuscripts and access to variant information in databases.
We have used this list to select journals including both requirements, and investigated the January 2016 issue of several of them. A group of students checked the publications first for the basic requirements: mention of the reference sequence used to describe variants and the presence of the variants in public databases. The next step was to check variant and phenotype descriptions, with specific attention to predicted protein effects. Our preliminary results suggest that predicted protein effects in publications cannot be verified in the case of altered splice sites without supporting RNA-level evidence, or in the case of insertions of unspecified nucleotides. Although the underlying variants are likely to have disease-causing effects, lack of supporting evidence disqualifies this information for diagnostic use. In several cases, the RNA-level information was specified in the gene variant database submission, indicating that submission of variants to databases prior to manuscript acceptance might improve the quality of publications. We have also observed cases in which authors state that variants have been submitted to databases, but the variants could not be found. Reviewers and journal editors could help improve manuscript quality by insisting on compliance with, and enforcing, the existing guidelines.
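The first of the basic checks described above — whether a variant description names a versioned reference sequence — is mechanical enough to sketch. The following is our own hypothetical illustration, not a tool used in the study, and it covers only the simple HGVS coding-sequence substitution pattern (e.g. "NM_000546.5:c.215C>G"); real HGVS validation handles many more variant types.

```python
# Hypothetical minimal check: does a variant description include a
# versioned RefSeq accession and follow the basic HGVS cDNA
# substitution pattern? (Substitutions only; an illustration, not a
# full HGVS validator.)
import re

HGVS_CDNA_SUB = re.compile(
    r"^(?P<refseq>N[MR]_\d+\.\d+)"   # versioned transcript accession
    r":c\.(?P<pos>[\d+*-]+)"         # coding-DNA position
    r"(?P<ref_base>[ACGT])>(?P<alt_base>[ACGT])$"  # substitution
)

def basic_hgvs_check(description):
    """Return True if the description is a well-formed simple
    cDNA substitution with an explicit reference sequence."""
    return HGVS_CDNA_SUB.match(description) is not None

ok = basic_hgvs_check("NM_000546.5:c.215C>G")   # reference sequence present
bad = basic_hgvs_check("c.215C>G")              # reference sequence missing
```

A check like this catches the most common compliance failure found in the survey — variant descriptions published without the reference sequence needed to interpret them.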
Inactivating mutations in TSC1 and TSC2 cause tuberous sclerosis complex (TSC). This disease is variable in severity and classically causes seizures, intellectual disability, behavioural difficulties, brain tumours, heart tumours, renal tumours and a facial rash. It occurs worldwide and is found in approximately one in 10,000 live births. It can be inherited as an autosomal dominant disorder, but about 70% of cases are new in the family, and there is a significant risk of recurrence in a second child because of gonadal mosaicism in one of the parents. In some cases, mildly affected individuals are not recognised until the birth of a severely affected child. Genetic testing is needed both for diagnosis and for reproductive decisions, but in at least 10% of tests the causative change is not found. There are also many TSC variants of uncertain significance, which are often unique.
The TSC1 and TSC2 databases (www.lovd.nl/TSC1 and www.lovd.nl/TSC2) attempt to record all TSC variants which have been reported, both in the context of genetic testing for TSC and in other clinical conditions. Classification of the pathogenicity of these TSC variants in the TSC1 and TSC2 databases refers to their ability to cause TSC. Data from August 2015 show 889 different small variants for TSC1, of which 66% were pathogenic and 9% were not. For TSC2, there were 2522 different small variants, of which 50% were pathogenic and 9% were not. All others were to some extent uncertain.
Recent advances in the quantity and quality of NGS data, and access to enormous amounts of population data, have produced many challenging opportunities to improve variant classification. We have now formalised our decision-making on the pathogenicity of variants; it uses variant type and position, the likely effect of the variant, confidence in the diagnosis, frequency of the variant in different patients, family details including de novo reports, co-occurrence with a known harmful variant and, where appropriate, an in vitro functional assay. We also now screen our database automatically for duplicates in very large cohort datasets, most of which do not have extensive phenotype data. We will present our current practice for interpretation and would welcome comments.
The TSC variation databases are funded by the TSA and the TS Alliance.
The ClinGen Sequence Variant Interpretation Working Group: Refining Criteria ... (Human Variome Project)
A key barrier to the efficient clinical utilization of genome sequence data is the lack of a systematic approach for interpreting the pathogenicity of genetic variants, resulting in discordant classifications among laboratories and researchers. The ClinGen (Clinical Genome) Project has been funded by the United States National Institutes of Health with the goal of maximizing the clinical relevance of results from genetic testing. ClinGen has established a Sequence Variant Interpretation (SVI) Working Group to refine and standardize the approach to pathogenicity classification. Recently the American College of Medical Genetics and Genomics (ACMG) published guidelines [1] that emerged from a workgroup representing the expert opinion of clinical laboratory directors and genetics clinicians. These guidelines were developed to help clinical laboratories that report results from sequencing of single genes, panels, exomes, and genomes. The ClinGen SVI has set short-term and long-term objectives for advancing the field of pathogenicity interpretation using the ACMG framework as a starting point. There is consensus within the field that correct classification of variants requires integrating multiple lines of evidence, including clinico-pathologic, epidemiologic, bioinformatics (in silico), and in vitro data; how best to combine them, however, remains unclear. The ACMG framework described different categories of evidence, assigned preliminary assessments of what comprises weak or strong evidence favoring a variant's pathogenicity or neutrality, and proposed preliminary rule-based algorithms for combining evidence. In the short term, the ClinGen SVI has set up sub-committees to define more precisely the criteria for pathogenicity and what comprises strong and weak evidence. In the long term, the ClinGen SVI looks to transition from qualitative descriptions to a quantitative system that can assign an empirically derived probability of pathogenicity to each variant.
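The rule-based combination of evidence described above can be illustrated with a toy sketch. This is a loose, simplified subset for illustration only, not the published ACMG combining rules: the criterion strengths, thresholds, and function name below are assumptions made for the example.

```python
# Toy sketch of rule-based evidence combination in the style of the ACMG
# framework. The combinations below are a simplified illustrative subset,
# NOT the full published rule set.
from collections import Counter

def classify(evidence):
    """evidence: list of strength labels ('very_strong', 'strong',
    'moderate', 'supporting') counted toward pathogenicity."""
    n = Counter(evidence)
    vs, s, m = n['very_strong'], n['strong'], n['moderate']
    # Pathogenic: e.g. one very-strong criterion backed by strong or
    # moderate evidence, or two independent strong criteria.
    if (vs >= 1 and (s >= 1 or m >= 2)) or s >= 2:
        return 'pathogenic'
    # Likely pathogenic: weaker but still converging combinations.
    if (vs >= 1 and m >= 1) or (s >= 1 and m >= 1) or m >= 3:
        return 'likely pathogenic'
    return 'uncertain significance'

print(classify(['very_strong', 'strong']))   # pathogenic
print(classify(['strong', 'moderate']))      # likely pathogenic
print(classify(['supporting']))              # uncertain significance
```

The long-term quantitative system mentioned above would replace such discrete rules with an empirically derived probability of pathogenicity per variant.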
Preliminary analyses suggest that there will be general rules that apply to all genes, as well as specific approaches for different genetic disorders. Well-characterized databases of variants for each genetic disorder will be critical to the process. Funded by the National Human Genome Research Institute through the following three grants: U41 HG006834-01A1, U01 HG007437-01, U01 HG007436-01, and by the National Cancer Institute through contract HHSN261200800001E. Reference: 1. Richards S, Aziz N, Bale S, et al. Standards and Guidelines for the Interpretation of Sequence Variants: A Joint Consensus Recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology. Genetics in Medicine 2015;17(5):405-424.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige... (University of Maribor)
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Phenomics assisted breeding in crop improvement (IshaGoswami9)
As the global population grows towards roughly 9 billion by 2050, and with climate change compounding the difficulty, meeting the food requirement of such a large population is a major challenge. Facing resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement through breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding complex, multi-gene traits, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomic information at all growth stages have become as important as genotyping; phenotyping has thus become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at multiple levels, including the cell, tissue, organ, individual plant, plot, and field. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V... (Wasswaderrick3)
In this book, we use conservation of energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity, and from this we derive the Poiseuille flow equation, the transition flow equation and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our techniques of energy conservation to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
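For orientation, the textbook forms of the results the abstract refers to can be written compactly (these are the standard engineering forms, not necessarily the notation used in the book; here p is pressure, ρ density, v speed, z height, g gravity, μ viscosity, and h_L the viscous head loss):

```latex
% Bernoulli equation extended with a viscous head-loss term h_L:
\frac{p_1}{\rho g} + \frac{v_1^2}{2g} + z_1 =
\frac{p_2}{\rho g} + \frac{v_2^2}{2g} + z_2 + h_L

% Hagen--Poiseuille relation for laminar flow in a pipe of radius r,
% length L, under pressure drop \Delta p:
Q = \frac{\pi r^4 \,\Delta p}{8 \mu L}

% Stokes terminal velocity of a sphere of radius r and density \rho_s
% falling in a fluid of density \rho_f and viscosity \mu:
v_t = \frac{2 r^2 (\rho_s - \rho_f)\, g}{9 \mu}
```

With h_L = 0 the first equation reduces to the classical Bernoulli equation, matching the abstract's statement about the inviscid limit.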
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... (Sérgio Sacani)
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
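The idea of strategically exploring a variability space can be sketched in a few lines. The feature model and option names below are hypothetical, chosen only to illustrate enumeration and uniform sampling of configurations; real tools use richer models with constraints between options.

```python
# Toy sketch: uniform sampling over a configuration space defined by a
# hypothetical feature model (option names are illustrative).
import itertools
import random

feature_model = {
    'compiler_opt': ['-O0', '-O2', '-O3'],
    'threads': [1, 4, 8],
    'cache': [True, False],
}

# Enumerate every configuration (feasible only for small models; larger
# spaces call for the sampling strategies mentioned above).
space = [dict(zip(feature_model, values))
         for values in itertools.product(*feature_model.values())]

# Uniform random sample without replacement, e.g. to fit a measurement budget.
sample = random.sample(space, k=5)
print(len(space))   # 3 * 3 * 2 = 18 configurations in total
```

Each sampled configuration would then be built and measured, feeding the cost-effective measurement and dimensionality-reduction techniques listed above.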
I will finally argue that deep variability is both the problem and the solution of frictionless reproducibility, and call on the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk, Journées Nationales du GDR GPL 2024
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... (University of Maribor)
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making. They monitor common gases, weather parameters, and particulates.
Seminar of U.V. Spectroscopy by Samir Panda
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that measures the amount of light absorbed by the analyte.
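The quantitative basis of UV-Vis absorption measurements is the Beer-Lambert law, A = ε·l·c. A minimal numeric sketch (the molar absorptivity value below is hypothetical, chosen only for illustration):

```python
# Beer-Lambert law sketch: absorbance A = log10(I0/I) = epsilon * l * c,
# so concentration can be recovered from a measured absorbance.
import math

def absorbance(I0, I):
    """A = log10(I0 / I), from incident and transmitted light intensity."""
    return math.log10(I0 / I)

def concentration(A, epsilon, path_cm=1.0):
    """c = A / (epsilon * l), in mol/L when epsilon is in L mol^-1 cm^-1."""
    return A / (epsilon * path_cm)

A = absorbance(100.0, 10.0)          # 90% of light absorbed -> A = 1.0
c = concentration(A, epsilon=15000)  # hypothetical molar absorptivity
print(A, c)
```

This is the calculation a UV-Vis spectrophotometer effectively performs when reporting analyte concentration from a measured absorbance.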
HVP Country Node: The Netherlands - Marielle van Gijn
1. SESSION VI
HVP Country Nodes
Thursday 2nd June
The Netherlands
Generade Center of Expertise Genomics (Peter Taschner)
Dutch Society of Genetic Diagnostic Laboratories (VKGL) (Marielle van Gijn)
2. Netherlands: a snapshot
• Clinical laboratories: 9
• Clinicians: ~43,000
• Genetic Counselors: ~160
• Genetic testing & genetic healthcare services:
– Genetic testing is funded by the health insurance
• National Human Genetics Society:
– Dutch Society of Human Genetics (NVHG)
– President: Prof. dr. F. Baas
– Next meeting: 06-10-2016
3. HVP Country Node: Organisation
• Year established: 2015
• Structure: no independent structure yet, operate in
connection with (national) societies (VKGL, NVHG) and
research organisations (Generade, LOVD-team LUMC)
• Funding: none
• Links with NHGS: part of VKGL datasharing working party
• Ethical framework, privacy, consent: to be established
according to Dutch/European guidelines and law
4. HVP Country Node: Technical
• National data repository: not yet, pending
• Data available nationally: locally in the 9 centers
• Data available internationally: LOVD and other disease specific
databases
• Available:
– Data Collection Policy: n/a
– Collection Agreement: n/a
– Data Access Policy: n/a
– Data Ownership Policy: n/a
5. Progress, Plans and Problems
• Progress in the past 12 months:
– All BRCA legacy data from the 9 centers was collected, curated and added
to the LOVD database
– Based on pilot studies from the VKGL data-sharing working party: a
data-sharing proposal addressing the sharing of variant classification
and frequency data.
• Grant (€21k) from the Dutch Biobanking and Biomolecular Research
Infrastructure project (BBMRI-NL)
• Additional funding from the Genetic Centers pending
• Plan for the next 12 months:
– Start a national database
• Problems/barriers we face:
– Lack of resources
– Lack of priority for genetic data sharing
– Lack of clear guidelines/law to address privacy issues