Ontologies are quickly becoming a core part of biomedical infrastructure, where they serve to standardize terminology, enable access to domain knowledge, verify data consistency, and facilitate integrative analyses over heterogeneous biomedical data. Given the increased use of ontologies in scientific research, we must consider how to consistently evaluate ontology-powered research, so that the contribution of the ontology to the effort can be quantified. Quantitative evaluation of research could then drive systematic improvement of an ontology's application and performance (as key measures of quality) and enable the comparison of any ontology against the overall result. With the emergence of vast amounts of relatively schema-light biomedical Linked Open Data, such as that provided by the open-source Bio2RDF project, new opportunities arise for applying, evaluating, and increasing the utility of ontologies in biomedical research.
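The quantitative evaluation called for above can be made concrete even with a very simple measure. The sketch below is my own illustration, not from the abstract, and all term names are hypothetical; it computes annotation coverage, i.e. the fraction of distinct terms used in a Linked Data set that are actually defined in the ontology under evaluation.

```python
# Illustrative sketch: annotation coverage as one quantitative measure of an
# ontology's contribution to a Linked Data set. All terms are hypothetical.

def ontology_coverage(data_terms, ontology_terms):
    """Fraction of distinct terms in the data that are defined in the ontology."""
    data_terms = set(data_terms)
    covered = data_terms & set(ontology_terms)
    return len(covered) / len(data_terms) if data_terms else 0.0

# Hypothetical terms drawn from a Bio2RDF-style dataset and a toy ontology.
data = ["chebi:Drug", "chebi:Role", "ex:unmapped_property"]
onto = ["chebi:Drug", "chebi:Role", "chebi:Molecule"]

print(round(ontology_coverage(data, onto), 2))  # 2 of 3 data terms are covered
```

Tracking such a score across ontology releases would give the kind of systematic, comparable evaluation the paragraph above argues for.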
Prof William Kosar: Letters of Credit as a Payment Method - William Kosar
This is the second lesson from a five-day course on Letters of Credit (in English and Arabic) taught to Iraqi private commercial bankers, both at the Banking and Finance Academy in Erbil and at the Banking Studies Center of the Central Bank of Iraq in Baghdad.
Presentation given on May 6th at CIM 2013 Toronto.
A discussion of the reasons that lead to the phenomenon of generalized public distrust, and of how better risk assessments would help reduce distrust and support better decisions.
Here are the results of our recent study about how people search online. We dive into this complex world of searching for people, which has become quite a hot topic thanks to the popularity of social networking websites.
The goals were:
- To discover how people search online for personal information on someone.
- To understand people’s impressions, likes and dislikes when using a social networking site or search engine to search for someone.
This presentation was given in May 2011 at the RIMS Conference, Mining and Metals Session.
Recent worldwide events have painfully shown many industries that natural hazards can impair ingress/egress capabilities in any business area. Mining, passenger transport, automotive, and even electronics companies have suffered major setbacks from climatic, seismic, fire, tornado, and other hazards.
The presentation shows that careful risk and crisis evaluation, management, and attentive mitigation can lead to a higher survival rate, and even a competitive edge over less-prepared companies.
Presentation to small business operators about how to grow a business and the role of market research, innovation, tribes, insight, beginner's mind, zen, and doing stuff (rather than getting lost in the chaos).
OWLED2009: A platform for distributing and reasoning with OWL-EL knowledge ba... - Michel Dumontier
Memory exhaustion is a common problem in tableau-based OWL reasoners when reasoning with large ontologies. One possible solution is to distribute the reasoning task across multiple machines. In this paper, we present, as preliminary work, a prototypical implementation for distributing OWL-EL ontologies over a Peer-to-Peer network and reasoning with them in a distributed manner. The algorithms presented are based on Distributed Hash Tables (DHTs), a technique commonly used by Peer-to-Peer applications. The system implementation was developed using the JXTA P2P platform and the Pellet OWL-DL reasoner. It remains to demonstrate the efficiency of our method and implementation with respect to stand-alone reasoners and other distributed systems.
http://www.webont.org/owled/2009/papers/owled2009_submission_34.pdf
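The DHT-based distribution described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm: axioms are simply hashed into a fixed key space, and each peer owns a contiguous slice of that space, so any peer can locate the node responsible for an axiom without a central index. The peer names and axiom strings are hypothetical.

```python
import hashlib

def dht_key(axiom: str, space: int = 2**16) -> int:
    """Map an axiom string to a point in a fixed key space, as a DHT would."""
    digest = hashlib.sha1(axiom.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % space

def responsible_peer(key: int, peers: list, space: int = 2**16) -> str:
    """Each peer owns an equal contiguous slice of the key space."""
    slice_size = space // len(peers)
    # min() guards against keys in the remainder at the top of the space.
    return peers[min(key // slice_size, len(peers) - 1)]

# Hypothetical EL-style axioms distributed over three hypothetical peers.
peers = ["peer-A", "peer-B", "peer-C"]
axioms = ["Cat SubClassOf Animal", "Animal SubClassOf LivingThing"]
for ax in axioms:
    print(ax, "->", responsible_peer(dht_key(ax), peers))
```

Real DHT systems such as the one built on JXTA additionally handle peers joining and leaving, which this fixed-slice sketch does not attempt.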
CDAO presentation.
The idea of the comparative analysis ontology (CDAO) has been presented worldwide, including at NESCent (USA), IGBMC (France), and UFRJ (Brazil). Providing a semantic framework for evolutionary analysis in a high-throughput way, in the wake of next- and third-generation sequencing, is the way to bring evolution-based studies into genome-wide analysis. The Darwinian core of reasoning also allows CDAO to be used with other entities.
ISMB/ECCB 2013 Keynote Goble Results may vary: what is reproducible? why do o... - Carole Goble
Keynote given by Carole Goble on 23rd July 2013 at ISMB/ECCB 2013
http://www.iscb.org/ismbeccb2013
How could we evaluate research and researchers? Reproducibility underpins the scientific method: at least in principle, if not in practice. The willing exchange of results and the transparent conduct of research can only be expected up to a point in a competitive environment. Contributions to science are acknowledged, but not if the credit is for data curation or software. From a bioinformatics viewpoint, how far could our results be reproducible before the pain is just too high? Is open science a dangerous, utopian vision or a legitimate, feasible expectation? How do we move bioinformatics from a field where results are post-hoc "made reproducible" to one where they are pre-hoc "born reproducible"? And why, in our computational information age, do we communicate results through fragmented, fixed documents rather than cohesive, versioned releases? I will explore these questions drawing on 20 years of experience in both the development of technical infrastructure for Life Science and the social infrastructure in which Life Science operates.
An Approach for Knowledge Extraction Using Ontology Construction and Machine ... - Waqas Tariq
In recent research, ontology construction plays a major role in transforming raw text into useful knowledge. The proposed method supports efficient retrieval with the help of an ontology and applies combined techniques to train the data before the testing process. The approach uses phrase pairs to extract useful knowledge and employs data mining techniques and a neural network approach to express the knowledge well; it also improves the search speed and accuracy of information retrieval. The method avoids noise generation by analyzing the relevancy of tags to the retrieval process, and shows a somewhat better recall value compared to other methods. In this approach, an optimized reasoner is applied to reduce the complexity of the key inference problem. The formulated ontology helps clearly express the meaning of various concepts and relations. As the ontology repository grows in size, the matching process may take more time; to avoid this, the method forms a hierarchical structure with a semantic interpretation of the data. The system is designed to eliminate domain dependency with the help of a dynamic labeling scheme using the ontology as a base. In this paper, our proposed models are presented with an ontology description using the Web Ontology Language (OWL).
BioCuration 2019 - Evidence and Conclusion Ontology 2019 Update - dolleyj
The Evidence and Conclusion Ontology (ECO) describes types of evidence relevant to biological investigations. First developed in the early 2000s, ECO now consists of over 1700 defined classes and is used by a large, and growing, list of resources. ECO imports close to 1000 classes from the Ontology for Biomedical Investigations and the Gene Ontology for use in logical definitions. Historically, ECO terms have generally been categorized by either the biological context of the evidence (e.g. gene expression) or the technique used to generate the evidence (e.g. PCR-based evidence). The result is that sometimes terms that have related biological context are found under different unrelated nodes. To address this, we have been performing a rigorous review of the structure and logic of the branches of ECO. Working with additional input from collaborators through the issue tracker on GitHub, term labels, definitions, and relationships are being evaluated and updated. The goal of these changes is to increase the logical consistency of ECO, make it easier for users to find and understand terms, and allow for ECO to continue to grow and support its users. In addition to the structural review, we have been working with CollecTF to utilize ECO for automated text mining. To generate a curated corpus for this effort, we have been annotating ECO terms to sentences which contain evidence-based assertions about gene products, taxonomic entities, and sequence features. From this effort we have developed clearly-defined annotation guidelines that have been passed on to a team of undergraduates who are continuing the curation effort.
Annotations are limited to single sentences, or to two consecutive sentences, containing the evidence instance and assertion clause. The quality of the mapping to ECO and the strength of the author’s assertion are also captured. ECO is freely available at http://evidenceontology.org/ and https://github.com/evidenceontology.
This is our presentation at the Third International Conference on Information Systems and Technologies (ICIST 2013), held in Tangier, Morocco, in which we propose a new approach for human assessment of ontologies using an online questionnaire.
Ontology Evaluation Methods and Metrics - This is work I did while I was at The MITRE Corporation. I came up with a framework to support ontology evaluation for reuse that could also be used for ontology construction. I was the sole author of the approach, which was intended to seed a research program and a community of practice around it. It has been on hold, and I would like that to change. I am now at the Tetherless World Constellation at Rensselaer Polytechnic Institute; if interested, contact me there.
COMPUTATIONAL METHODS FOR FUNCTIONAL ANALYSIS OF GENE EXPRESSION - csandit
Sequencing projects arising from high-throughput technologies, including DNA microarrays, have made it possible to simultaneously measure the expression levels of millions of genes in a biological sample, as well as to annotate and identify the role (function) of those genes. Consequently, to better manage and organize this significant amount of information, bioinformatics approaches have been developed. These approaches provide a representation and a more relevant integration of the data in order to test and validate researchers' hypotheses throughout the experimental cycle. In this context, this article describes and discusses some of the techniques used for the functional analysis of gene expression data.
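One widely used technique in this family (which may or may not be among those the article surveys) is ontology-term enrichment via the hypergeometric test: checking whether a differentially expressed gene set contains more genes annotated to a functional term than chance would predict. A minimal sketch with toy numbers of my own:

```python
from math import comb

def hypergeom_pvalue(k, K, n, N):
    """P(X >= k): probability of drawing at least k annotated genes when
    sampling n genes from N total, of which K carry the annotation."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers: 100 genes total, 10 annotated to some functional term; our
# differentially expressed set of 5 genes happens to contain 3 of them.
p = hypergeom_pvalue(k=3, K=10, n=5, N=100)
print(f"{p:.4f}")  # small p suggests the term is enriched in the gene set
```

In practice the annotations would come from a resource such as the Gene Ontology, and p-values would be corrected for testing many terms at once.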
Elsevier's Scopus.com upgraded the Journal Analyzer with Source Normalized Impact per Paper (SNIP), which measures a source's contextual impact, and SCImago Journal Rank (SJR), which measures the scientific prestige of scholarly sources.
These indicators will be applied to all journals indexed by Scopus and will be freely available to both subscribers and non-subscribers at scopus.com and www.journalmetrics.com.
The increased availability of biomedical data, particularly in the public domain, offers the opportunity to better understand human health and to develop effective therapeutics for a wide range of unmet medical needs. However, data scientists remain stymied by the fact that data are hard to find and to productively reuse, because data and their metadata i) are wholly inaccessible, ii) are in non-standard or incompatible representations, iii) do not conform to community standards, and iv) have unclear or highly restricted terms and conditions that preclude legitimate reuse. These limitations require a rethink of how data can be made machine- and AI-ready - the key motivation behind the FAIR Guiding Principles. Concurrently, while recent efforts have explored the use of deep learning to fuse disparate data into predictive models for a wide range of biomedical applications, these models often fail even when the correct answer is already known, and fail to explain individual predictions in terms that data scientists can appreciate. These limitations suggest that new methods to produce practical artificial intelligence are still needed.
In this talk, I will discuss our work in (1) building an integrative knowledge infrastructure to prepare FAIR and "AI-ready" data and services along with (2) neurosymbolic AI methods to improve the quality of predictions and to generate plausible explanations. Attention is given to standards, platforms, and methods to wrangle knowledge into simple, but effective semantic and latent representations, and to make these available into standards-compliant and discoverable interfaces that can be used in model building, validation, and explanation. Our work, and those of others in the field, creates a baseline for building trustworthy and easy to deploy AI models in biomedicine.
Bio
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University, founder and executive director of the Institute of Data Science, and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research explores socio-technological approaches for responsible discovery science, which includes collaborative multi-modal knowledge graphs, privacy-preserving distributed data mining, and AI methods for drug discovery and personalized medicine. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon Europe, the European Open Science Cloud, the US National Institutes of Health, and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.
Knowledge graphs are an emerging paradigm for representing information, yet their discovery and reuse is hampered by insufficient or inadequate metadata. Here, the COST Action on Distributed Knowledge Graphs held a first workshop to develop a KG metadata schema. In this presentation, the progress and plans are discussed with the W3C Community Group on Knowledge Graph Construction.
Data-Driven Discovery Science with FAIR Knowledge Graphs - Michel Dumontier
Data-Driven Discovery Science with FAIR Knowledge Graphs
Despite the existence of vast amounts of biomedical data, these remain difficult to find and to productively reuse in machine learning and other Artificial Intelligence technologies. In this talk, I will discuss the role of the FAIR Guiding Principles in making biomedical data AI-ready, and how their representation as knowledge graphs not only enables powerful ontology-backed semantic queries, but can also be used to predict missing information and to check the quality of the collected knowledge.
The main idea of the talk is to introduce the FAIR principles (what they are and what they are not), and how their application with semantic web technologies (ontologies/linked data) creates improved possibilities for large scale data integration, answering sophisticated questions using automated reasoners, and predicting new relations/validating data using graph embeddings. The audience will gain insight into the state of the art in a carefully presented manner that introduces principles, approaches, and outcomes relevant to Health AI.
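As a toy illustration of predicting missing relations in a knowledge graph: the talk refers to graph embeddings, but even a simple neighbourhood-overlap heuristic conveys the idea that entities with very similar connections are candidates to share relations they do not yet have. The heuristic and the triples below are simplifications of mine, not the talk's method.

```python
# Hypothetical drug-target triples; in practice these would come from a FAIR
# knowledge graph, and scoring would use learned embeddings, not this heuristic.
triples = [
    ("drugA", "targets", "protein1"),
    ("drugA", "targets", "protein2"),
    ("drugB", "targets", "protein1"),
    ("drugB", "targets", "protein2"),
    ("drugC", "targets", "protein3"),
]

def neighbours(entity):
    """Set of entities this entity points to in the graph."""
    return {t for (h, _, t) in triples if h == entity}

def jaccard(a, b):
    """Neighbourhood overlap: high scores flag entities likely to share relations."""
    na, nb = neighbours(a), neighbours(b)
    return len(na & nb) / len(na | nb) if na | nb else 0.0

print(jaccard("drugA", "drugB"))  # identical targets: strong similarity signal
print(jaccard("drugA", "drugC"))  # no shared targets: no signal
```

Embedding-based methods generalise this idea by learning vector representations in which such regularities become geometric, but the prediction task is the same.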
The FAIR (Findable, Accessible, Interoperable, Reusable) Guiding Principles light a path towards improving the discovery and reuse of digital objects (data, documents, software, web services, etc.) by machines. Machine reusability is a crucial strategic component in building robust digital infrastructure that strengthens scholarship and opens new pathways for innovation on a truly global scale. However, as the FAIR principles do not specify any particular implementation, communities are left with the homework of devising, standardizing, and implementing technical specifications to improve the ‘FAIRness’ of digital assets. In this seminar, I will focus on the history and state of the art in FAIRness assessment, including manual, semi-automated, and fully automated approaches, and how these can be used by developers and consumers alike. This seminar will serve as a springboard for community discussion and adoption of these services to incrementally and realistically improve the FAIRness of their resources.
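A fully automated FAIRness assessment of the kind surveyed above can be sketched as a battery of programmatic checks over a resource's metadata record. The field names, check wording, and record below are hypothetical stand-ins, not any published metric suite:

```python
# Each check probes one FAIR facet of a metadata record; the record and its
# field names are hypothetical stand-ins for a real metadata schema.
CHECKS = {
    "Findable: has persistent identifier": lambda m: bool(m.get("identifier")),
    "Accessible: has retrieval URL":       lambda m: bool(m.get("access_url")),
    "Interoperable: standard RDF format":  lambda m: m.get("format") in {"RDF/XML", "Turtle", "JSON-LD"},
    "Reusable: has explicit license":      lambda m: bool(m.get("license")),
}

def assess(metadata):
    """Run every check and return pass/fail per FAIR facet."""
    return {name: check(metadata) for name, check in CHECKS.items()}

record = {
    "identifier": "https://doi.org/10.1234/example",  # hypothetical DOI
    "access_url": "https://example.org/data.ttl",
    "format": "Turtle",
    # no license: the Reusable check should fail
}

results = assess(record)
for name, passed in results.items():
    print(("PASS" if passed else "FAIL"), name)
```

Real assessment frameworks differ mainly in where the checks come from (community-agreed tests versus ad hoc ones) and in probing live behaviour, e.g. actually resolving the identifier, rather than only inspecting declared metadata.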
The Role of the FAIR Guiding Principles for an effective Learning Health System - Michel Dumontier
The learning health system (LHS) is an integrated social and technological system that embeds continuous improvement and innovation for the effective delivery of healthcare. A crucial part of the LHS lies in how the underlying information system will secure and take advantage of relevant knowledge assets towards supporting complex and unusual clinical decision making, facilitating public health surveillance, and aiding comparative effectiveness research. However, key knowledge assets remain difficult to obtain and reuse, particularly in a decentralized context. In this talk, I will discuss the role of the Findable, Accessible, Interoperable, and Reusable (FAIR) Guiding Principles towards the realization of the LHS, along with emerging technologies to publish and refine clinical research and knowledge derived therein.
Keynote given for 2021 Knowledge Representation for Health Care http://banzai-deim.urv.net/events/KR4HC-2021/
CIKM2020 Keynote: Accelerating discovery science with an Internet of FAIR dat... - Michel Dumontier
Biomedicine has always been a fertile and challenging domain for computational discovery science. Indeed, the existence of millions of scientific articles, thousands of databases, and hundreds of ontologies offers exciting opportunities to reuse our collective knowledge, were we not stymied by incompatible formats, overlapping and incomplete vocabularies, unclear licensing, and heterogeneous access points. In this talk, I will discuss our work to create computational standards, platforms, and methods to wrangle knowledge into simple but effective representations based on semantic web technologies that are maximally FAIR - Findable, Accessible, Interoperable, and Reusable - and to further use these for biomedical knowledge discovery. But only with additional crucial developments will this emerging Internet of FAIR data and services enable automated scientific discovery on a global scale.
bio:
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research focuses on the development of computational methods for scalable and responsible discovery science. Dr. Dumontier obtained his BSc (Biochemistry) in 1998 from the University of Manitoba, and his PhD (Bioinformatics) in 2005 from the University of Toronto. Previously a faculty member at Carleton University in Ottawa and Stanford University in Palo Alto, Dr. Dumontier founded and directs the interfaculty Institute of Data Science at Maastricht University to develop sociotechnological systems for responsible data science by design. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon 2020, the European Open Science Cloud, the US National Institutes of Health and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.
This presentation was given on October 21, 2020 at CIKM2020.
The role of the FAIR Guiding Principles in a Learning Health System - Michel Dumontier
The learning health system (LHS) is a concept for a socio-technological system that continuously improves the delivery of health care by coupling biomedical research with practice- and evidence- based medicine. Key aspects of the LHS are collecting, integrating, and analyzing data from different sources. While the increased digitalisation of healthcare is creating new data sources, these remain hard to find and use, let alone make use of as part of intelligent systems for the benefit of patients, healthcare providers, and researchers. This talk will examine recent developments towards making key parts of the LHS, such as clinical practice guidelines, Findable, Accessible, Interoperable, and Reusable (FAIR).
Accelerating biomedical discovery with an internet of FAIR data and services -... - Michel Dumontier
With its focus on improving the health and well-being of people, biomedicine has always been a fertile, if not challenging, domain for computational discovery science. Indeed, the existence of millions of scientific articles, thousands of databases, and hundreds of ontologies offers exciting opportunities to reuse our collective knowledge, were we not stymied by incompatible formats, overlapping and incomplete vocabularies, unclear licensing, and heterogeneous access points. In this talk, I will discuss our work to create computational standards, platforms, and methods to wrangle knowledge into simple but effective representations based on semantic web technologies that are maximally FAIR - Findable, Accessible, Interoperable, and Reusable - and to further use these for biomedical knowledge discovery. But only with additional crucial developments will this emerging Internet of FAIR data and services, which is built on Semantic Web technologies, be well positioned to support automated scientific discovery on a global scale.
Accelerating Biomedical Research with the Emerging Internet of FAIR Data and ... - Michel Dumontier
With its focus on improving the health and well-being of people, biomedicine has always been a fertile, if not challenging, domain for computational discovery science. Indeed, the existence of millions of scientific articles, thousands of databases, and hundreds of ontologies offers exciting opportunities to reuse our collective knowledge, were we not stymied by incompatible formats, overlapping and incomplete vocabularies, unclear licensing, and heterogeneous access points. In this talk, I will discuss our work to create computational standards, platforms, and methods to wrangle knowledge into simple but effective representations based on semantic web technologies that are maximally FAIR - Findable, Accessible, Interoperable, and Reusable - and to further use these for biomedical knowledge discovery. But only with additional crucial developments will this emerging Internet of FAIR data and services enable automated scientific discovery on a global scale.
Are we FAIR yet? And will it be worth it?
The FAIR Principles propose essential characteristics that all digital resources (e.g. datasets, repositories, web services) should possess to be Findable, Accessible, Interoperable, and Reusable by both humans and machines. The Principles act as a guide to what researchers and data stewards should expect from contemporary digital resources, and in turn, to the requirements on them when publishing their own scholarly products. As interest in, and support for, the Principles has spread, the diversity of interpretations has also broadened, with some resources claiming to already “be FAIR”.
This talk will elaborate on what FAIR is, what it entails, and how we should evaluate FAIRness. I will describe new social and technological infrastructure to support the creation and evaluation of FAIR resources, and how FAIR fits into institutional, national and international efforts. Finally, I will discuss the merits of the FAIR principles (and what we ask of people) in the context of strengthening data-driven scientific inquiry.
Keynote given at NETTAB2018 - http://www.igst.it/nettab/2018/
The future of science and business - a UM Star Lecture - Michel Dumontier
I discuss how data science is affecting our way of life and how we at Maastricht University are preparing the next generation of leaders to address opportunities and challenges in a responsible manner.
The FAIR Principles propose key characteristics that all digital resources (e.g. datasets, repositories, web services) should possess to be Findable, Accessible, Interoperable, and Reusable by people and machines. The Principles act as a guide to what researchers should expect from contemporary digital resources, and in turn, to the requirements on them when publishing their own scholarly products. As interest in, and support for, the Principles has spread, the diversity of interpretations has also broadened, with some resources claiming to already “be FAIR”. This talk will elaborate on what FAIR is, why we need it, what it entails, and how we should evaluate FAIRness. I will describe new social and technological infrastructure to support the creation and evaluation of FAIR resources, and how FAIR fits into institutional, national and international efforts. Finally, I will discuss the merits of the FAIR principles (and what we ask of people) in the context of strengthening data-driven scientific inquiry.
A talk prepared for Workshop Working on data stewardship? Meet your peers!
Datum: 03 OKT 2017
https://www.surf.nl/agenda/2017/10/workshop-working-on-data-stewardship-meet-your-peers/index.html
Towards metrics to assess and encourage FAIRness - Michel Dumontier
With increased interest in the FAIR metrics, there is a need to develop tools and approaches that can assess the FAIRness of a digital resource. This talk begins to explore some ideas in this space, and invites people to participate in a working group focused on the development, application, and evaluation of FAIR metrics.
A presentation to the New Year's Event for Maastricht University's Knowledge Engineering @ Work Program. https://www.maastrichtuniversity.nl/news/kework-first-10-students-academic-workstudy-track-graduate
These lecture slides, by Dr Sidra Arshad, offer a quick overview of the physiological basis of a normal electrocardiogram.
Learning objectives:
1. Define an electrocardiogram (ECG) and electrocardiography
2. Describe how dipoles generated by the heart produce the waveforms of the ECG
3. Describe the components of a normal electrocardiogram in a typical bipolar limb lead (lead II)
4. Differentiate between intervals and segments
5. Enlist some common indications for obtaining an ECG
Study Resources:
1. Chapter 11, Guyton and Hall Textbook of Medical Physiology, 14th edition
2. Chapter 9, Human Physiology - From Cells to Systems, Lauralee Sherwood, 9th edition
3. Chapter 29, Ganong’s Review of Medical Physiology, 26th edition
4. Electrocardiogram, StatPearls - https://www.ncbi.nlm.nih.gov/books/NBK549803/
5. ECG in Medical Practice by ABM Abdullah, 4th edition
6. ECG Basics, http://www.nataliescasebook.com/tag/e-c-g-basics
Flu Vaccine Alert in Bangalore, Karnataka - addon Scans
As flu season approaches, health officials in Bangalore, Karnataka, are urging residents to get their flu vaccinations. The seasonal flu, while common, can lead to severe health complications, particularly for vulnerable populations such as young children, the elderly, and those with underlying health conditions.
Dr. Vidisha Kumari, a leading epidemiologist in Bangalore, emphasizes the importance of getting vaccinated. "The flu vaccine is our best defense against the influenza virus. It not only protects individuals but also helps prevent the spread of the virus in our communities," he says.
This year, the flu season is expected to coincide with a potential increase in other respiratory illnesses. The Karnataka Health Department has launched an awareness campaign highlighting the significance of flu vaccinations. They have set up multiple vaccination centers across Bangalore, making it convenient for residents to receive their shots.
To encourage widespread vaccination, the government is also collaborating with local schools, workplaces, and community centers to facilitate vaccination drives. Special attention is being given to ensuring that the vaccine is accessible to all, including marginalized communities who may have limited access to healthcare.
Residents are reminded that the flu vaccine is safe and effective. Common side effects are mild and may include soreness at the injection site, mild fever, or muscle aches. These side effects are generally short-lived and far less severe than the flu itself.
Healthcare providers are also stressing the importance of continuing COVID-19 precautions. Wearing masks, practicing good hand hygiene, and maintaining social distancing are still crucial, especially in crowded places.
Protect yourself and your loved ones by getting vaccinated. Together, we can help keep Bangalore healthy and safe this flu season. For more information on vaccination centers and schedules, residents can visit the Karnataka Health Department’s official website or follow their social media pages.
Stay informed, stay safe, and get your flu shot today!
- Video recording of this lecture in English language: https://youtu.be/lK81BzxMqdo
- Video recording of this lecture in Arabic language: https://youtu.be/Ve4P0COk9OI
- Link to download the book free: https://nephrotube.blogspot.com/p/nephrotube-nephrology-books.html
- Link to NephroTube website: www.NephroTube.com
- Link to NephroTube social media accounts: https://nephrotube.blogspot.com/p/join-nephrotube-on-social-media.html
Title: Sense of Taste
Presenter: Dr. Faiza, Assistant Professor of Physiology
Qualifications:
MBBS (Best Graduate, AIMC Lahore)
FCPS Physiology
ICMT, CHPE, DHPE (STMU)
MPH (GC University, Faisalabad)
MBA (Virtual University of Pakistan)
Learning Objectives:
Describe the structure and function of taste buds.
Describe the relationship between the taste threshold and taste index of common substances.
Explain the chemical basis and signal transduction of taste perception for each type of primary taste sensation.
Recognize different abnormalities of taste perception and their causes.
Key Topics:
Significance of Taste Sensation:
Differentiation between pleasant and harmful food
Influence on behavior
Selection of food based on metabolic needs
Receptors of Taste:
Taste buds on the tongue
Influence of sense of smell, texture of food, and pain stimulation (e.g., by pepper)
Primary and Secondary Taste Sensations:
Primary taste sensations: Sweet, Sour, Salty, Bitter, Umami
Chemical basis and signal transduction mechanisms for each taste
Taste Threshold and Index:
Taste threshold values for Sweet (sucrose), Salty (NaCl), Sour (HCl), and Bitter (Quinine)
Taste index relationship: Inversely proportional to taste threshold
Taste Blindness:
Inability to taste certain substances, particularly thiourea compounds
Example: Phenylthiocarbamide
Structure and Function of Taste Buds:
Composition: Epithelial cells, Sustentacular/Supporting cells, Taste cells, Basal cells
Features: Taste pores, Taste hairs/microvilli, and Taste nerve fibers
Location of Taste Buds:
Found in papillae of the tongue (Fungiform, Circumvallate, Foliate)
Also present on the palate, tonsillar pillars, epiglottis, and proximal esophagus
Mechanism of Taste Stimulation:
Interaction of taste substances with receptors on microvilli
Signal transduction pathways for Umami, Sweet, Bitter, Sour, and Salty tastes
Taste Sensitivity and Adaptation:
Decrease in sensitivity with age
Rapid adaptation of taste sensation
Role of Saliva in Taste:
Dissolution of tastants to reach receptors
Washing away the stimulus
Taste Preferences and Aversions:
Mechanisms behind taste preference and aversion
Influence of receptors and neural pathways
Impact of Sensory Nerve Damage:
Degeneration of taste buds if the sensory nerve fiber is cut
Abnormalities of Taste Detection:
Conditions: Ageusia, Hypogeusia, Dysgeusia (parageusia)
Causes: Nerve damage, neurological disorders, infections, poor oral hygiene, adverse drug effects, deficiencies, aging, tobacco use, altered neurotransmitter levels
Neurotransmitters and Taste Threshold:
Effects of serotonin (5-HT) and norepinephrine (NE) on taste sensitivity
Supertasters:
25% of the population with heightened sensitivity to taste, especially bitterness
Increased number of fungiform papillae
Couples presenting to the infertility clinic- Do they really have infertility...Sujoy Dasgupta
Dr Sujoy Dasgupta presented the study on "Couples presenting to the infertility clinic- Do they really have infertility? – The unexplored stories of non-consummation" in the 13th Congress of the Asia Pacific Initiative on Reproduction (ASPIRE 2024) at Manila on 24 May, 2024.
NVBDCP.pptx Nation vector borne disease control programSapna Thakur
NVBDCP was launched in 2003-2004 . Vector-Borne Disease: Disease that results from an infection transmitted to humans and other animals by blood-feeding arthropods, such as mosquitoes, ticks, and fleas. Examples of vector-borne diseases include Dengue fever, West Nile Virus, Lyme disease, and malaria.
Ozempic: Preoperative Management of Patients on GLP-1 Receptor Agonists Saeid Safari
Preoperative Management of Patients on GLP-1 Receptor Agonists like Ozempic and Semiglutide
ASA GUIDELINE
NYSORA Guideline
2 Case Reports of Gastric Ultrasound
Evaluation of ontology-powered scientific research as a means to assess and improve ontology quality
1. Evaluation of ontology-powered scientific research as a means to assess and improve ontology quality
Michel Dumontier, Ph.D.
Associate Professor of Bioinformatics
Department of Biology, School of Computer Science, Institute of Biochemistry, Carleton University
Ottawa Institute of Systems Biology
Ottawa-Carleton Institute of Biomedical Engineering
Professeur Associé, Université Laval
Chair, W3C Semantic Web for Health Care and Life Sciences Interest Group
Ontolog Summit 2013::Dumontier:March 21, 2013
2. Why should users care about what terms an ontology contains and how it is structured? How should ontology designers evaluate their research?
3. Use of ontologies in biomedical investigations
• In 1998, researchers involved in annotating the fruit fly, mouse and yeast genomes came together to build the Gene Ontology (GO) - a controlled vocabulary to annotate genes (gene products) with:
– Molecular function
– Cellular compartment
– Biological process
• Back in 2006, the cost of developing the GO was estimated to be >$16M
• Thousands of genomes have been annotated with nearly 30,000 terms.
• Hundreds of tools have been devised to mine this information in order to help elucidate organismal capabilities and limitations, and to interpret the results of experiments
4. Gene Set Enrichment Analysis
• Goal: identify a set of terms that are significantly enriched for a set of genes identified through some experiment
• Compare the set of annotations for target genes against all other plausible genes (Fisher’s exact test).
• Depends on
– # and structure of terms in the ontology
– # of annotations using ontology terms
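The Fisher's exact test described above can be sketched in a few lines. This is a minimal pure-Python version of the one-sided (hypergeometric upper-tail) test with hypothetical counts; a real enrichment analysis would use an established statistics library and correct for multiple testing.

```python
from math import comb

def enrichment_p(k, n, K, N):
    """One-sided Fisher's exact test via the hypergeometric upper tail.

    k: target genes annotated with the GO term
    n: total genes in the target set
    K: background genes annotated with the term
    N: total genes in the background
    Returns P(X >= k): the chance of seeing at least k annotated genes
    in a random draw of n genes from the background.
    """
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Hypothetical counts: 8 of 20 target genes carry the term,
# versus 40 of 1000 genes in the background.
p = enrichment_p(8, 20, 40, 1000)
```

As the slide notes, the resulting p-value depends directly on how the ontology partitions genes into terms (K) and on how many annotations exist at all (N).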
6. What’s the impact of changes in the gene ontology and annotations on gene set enrichment analysis?
Erik L. Clarke, Benjamin M. Good, and Andrew I. Su. A Task-Based Approach for Gene Ontology Evaluation. Bio-Ontologies 2012 SIG.
7. Top 10 most enriched terms differ in subsequent years

    2006                                     | 2012
    -----------------------------------------|-----------------------------------
    System development                       | Synaptic transmission
    Cell-cell signaling                      | System development
    Cell communication                       | Response to interferon-γ
    Microtubule-based process                | Secretion by cell
    Nervous system development               | Secretion
    Inositol lipid-mediated signaling        | Chemotaxis
    Phosphatidylinositol-mediated signaling  | Taxis
    Regulation of catalytic activity         | Blood coagulation
    Regulation of cell cycle                 | Coagulation
    Intracellular protein transport          | Cellular response to interferon-γ

Erik L. Clarke, Benjamin M. Good, and Andrew I. Su. A Task-Based Approach for Gene Ontology Evaluation. Bio-Ontologies 2012 SIG.
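The turnover between the two top-10 lists can itself be quantified. A simple sketch, using the terms from the table above: the Jaccard overlap of the two sets, where only "System development" survives from 2006 to 2012.

```python
top10_2006 = {
    "System development", "Cell-cell signaling", "Cell communication",
    "Microtubule-based process", "Nervous system development",
    "Inositol lipid-mediated signaling",
    "Phosphatidylinositol-mediated signaling",
    "Regulation of catalytic activity", "Regulation of cell cycle",
    "Intracellular protein transport",
}
top10_2012 = {
    "Synaptic transmission", "System development",
    "Response to interferon-γ", "Secretion by cell", "Secretion",
    "Chemotaxis", "Taxis", "Blood coagulation", "Coagulation",
    "Cellular response to interferon-γ",
}

shared = top10_2006 & top10_2012            # only "System development"
jaccard = len(shared) / len(top10_2006 | top10_2012)  # 1/19, about 0.05
```

An overlap this low between two runs of the same analysis on the same data set is exactly the kind of instability the next slides discuss.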
8. Significance of any given term changes with time
Angiogenesis only becomes significant after 2007. Eight terms only become significant after 2006.
Conclusion: enrichment analysis using human Gene Ontology annotations improved significantly since 2002.
[Figure: p-values of Angiogenesis (red) and the ten top terms of 2012 (grey) for GDS1962; the blue line is the significance threshold (p-value < 0.01).]
Erik L. Clarke, Benjamin M. Good, and Andrew I. Su. A Task-Based Approach for Gene Ontology Evaluation. Bio-Ontologies 2012 SIG.
9. A Task-Based Approach for Gene Ontology Evaluation
• Ontology-based research is not future-proof.
• Re-analysis of past experiments may yield new and important results. However, it may also remove previously significant results.
• Suggests that continuous evaluation of research results needs to occur.
• We need to understand how changes in ontologies affect our research results.
10. Evaluation of Ontology Research
• Considerable debate about the importance and effectiveness of metrics to evaluate results of ontology research
• What constitutes a (novel) research result?
– Capability to do X via some method
– Improved capability to do X, assessed by methodological comparison
• Challenges in ontology design
– Coverage of the domain and degree of formalization are limiting factors
– A combination of factors is likely required to predict the capability of an ontology for an arbitrary scenario.
Hoehndorf R, Dumontier M, Gkoutos GV. Evaluation of research in biomedical ontologies. Brief Bioinform. 2012.
11. Quantifying Ontology Research

Application                | Evaluation                                                                     | Description
---------------------------|--------------------------------------------------------------------------------|------------
Community agreement        | User-study [% agreement, statistic]                                            | From textual descriptions to any aspect of formalization, generate confidence measures that indicate the degree to which a significant number [>15] of people agree.
Consistent data annotation | User-study [% agreement, statistic]                                            | Use an ontology to annotate the types, attributes and relations in a dataset.
Data integration           | Analysis [precision, recall, F-measure]                                        | Establish agreement on the points of integration and/or provide an analysis of the integrated data set; compare to use cases or a gold standard.
Query answering            | Test suite [# of tests passed, precision, recall, F-measure, complexity class] | Evaluate the extent to which the ontology can be used to answer questions of relevance to the domain. Use or jointly establish a gold standard with other communities.
Data consistency           | Test suite [# of tests passed, contradictions found, complexity class]         | Evaluate the extent to which the ontology can be used to identify inconsistent knowledge.
Novel scientific results   | Case-specific validation [p-value, F-measure, ROC AUC]                         | Evaluate the extent to which novel relations can be extracted against some gold standard.

Ontolog Summit 2013::Dumontier:March 21, 2013
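Several rows of the table score an ontology against a gold standard with precision, recall and F-measure. A minimal sketch of those metrics, with made-up counts standing in for a real query-answering run:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard gold-standard comparison metrics.

    tp: answers both returned and in the gold standard
    fp: answers returned but not in the gold standard
    fn: gold-standard answers the ontology failed to return
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical run: 8 correct answers, 2 spurious, 2 missed.
p, r, f = precision_recall_f1(8, 2, 2)  # 0.8, 0.8, 0.8
```

Reporting all three numbers, rather than a single score, keeps visible whether an ontology fails by answering wrongly (precision) or by not covering the domain (recall).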
12. Quantifying Ontology Research
• Community agreement
– Assess the degree to which a community agrees about any aspect of an ontology, for example:
• Evaluate alternate textual definitions
• Associate and evaluate synonyms, hyponyms
• Associate and evaluate mereological, subsumption and other relations
– Quantitatively assess with a user-study [% agreement, statistic]
– Example: 39% chance that GO curators select the same GO term to annotate text; 19% chance they will annotate a term from the same GO lineage; and 43% chance to extract a term from a new/different lineage. [1]
[1] Evaluation of GO-based functional similarity measures using S. cerevisiae protein interaction and expression profile data. BMC Bioinformatics 2008, 9:472. doi:10.1186/1471-2105-9-472
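The "% agreement" statistic referenced above can be computed as pairwise agreement among curators. A minimal sketch, with hypothetical curator choices (the GO identifiers are illustrative only):

```python
from itertools import combinations

def percent_agreement(choices):
    """Fraction of curator pairs that picked the same term
    for the same piece of text."""
    pairs = list(combinations(choices, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Three hypothetical curators annotating one sentence:
# two pick the same term, one picks a different one -> 1/3 of pairs agree.
agreement = percent_agreement(["GO:0008150", "GO:0008150", "GO:0003674"])
```

More robust study designs would use a chance-corrected statistic (e.g. Cohen's or Fleiss' kappa) on top of raw agreement, since some agreement occurs by luck alone.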
13. • 68 volunteers linked 661 terms to each other and to a pre-existing upper ontology by adding 245 hyponym relationships and 340 synonym relationships
– Judged terms to be sensible, nonsense, or outside their expertise
Less than 50% of terms had 100% agreement. Another 30% had 70-90% agreement. Would you include the remaining 20% in your ontology?
14. Used volunteers to judge the correctness of automatically inferred subsumption relationships, generated from an automatic mapping of MeSH to OWL (expect ~40% incorrect subclass relations)
– 130 subclass relations tested with 25 volunteers
[Figure: confidence-weighted responses]
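The "confidence-weighted response" in the figure can be read as aggregating each volunteer's correct/incorrect judgement weighted by self-reported confidence. This sketch assumes that reading; it is not the study's actual formula.

```python
def confidence_weighted_response(votes):
    """votes: (judgement, confidence) pairs, where judgement is
    +1 (relation looks correct) or -1 (incorrect) and confidence
    is in [0, 1]. Returns a score in [-1, 1]; the sign gives the
    weighted verdict on the inferred subclass relation."""
    total = sum(conf for _, conf in votes)
    return sum(judge * conf for judge, conf in votes) / total

# Two confident accepts outweigh one hesitant reject:
score = confidence_weighted_response([(+1, 0.9), (+1, 0.8), (-1, 0.3)])
```

Weighting by confidence lets hesitant responses ("outside my expertise") count less than firm ones, which matters when ~40% of the automatically inferred relations are expected to be wrong.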
15. Ontology-based Data Integration, Consistency Checking and Discovery
• Checking the consistency of semantic annotations [1]
– Formalized semantic annotations in SBML models as OWL axioms. Automated reasoning uncovered inconsistencies in 16 models.
• e.g. alpha-D-glucose phosphate is not the required ATP in an ATP-dependent reaction (required GO + ChEBI + disjointness + existential + universal quantification)
• Finding significant biomedical associations [2]
– Found significant associations between genes, drugs, diseases and pathways using DrugBank, PharmGKB, CTD and PID, across categories of drugs (ChEBI, ATC, MeSH) and diseases (DO, MeSH)
– 22,653 pathway-disease type associations (6,304 over-represented; 16,349 under-represented)
• e.g. carcinosarcoma (DOID:4236) and the Zidovudine Pathway (PharmGKB:PA165859361)
– 13,826 pathway-chemical type associations (12,564 over; 1,262 under)
• e.g. the drug clopidogrel (CHEBI:37941) with the Endothelin signaling pathway (PharmGKB:PA164728163)
http://pharmgkb-owl.googlecode.com
1. Integrating systems biology models and biomedical ontologies. BMC Systems Biology. 2011. 5:124.
2. Identifying aberrant pathways through integrated analysis of knowledge in pharmacogenomics. Bioinformatics. 2012. In press.
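At its core, the SBML consistency check asks whether an annotated reaction participant is disjoint with the class the reaction requires. A toy illustration of that one axiom type, using plain labels instead of real ChEBI identifiers; a full check needs an OWL reasoner over the complete axiom set.

```python
def inconsistent_annotations(required, annotated, disjoint):
    """Return the annotations that a disjointness axiom rules out
    for a slot requiring the class `required`."""
    return [a for a in annotated
            if (required, a) in disjoint or (a, required) in disjoint]

# Hypothetical axiom set with a single disjointness declaration:
disjoint = {("ATP", "alpha-D-glucose phosphate")}

# An ATP-dependent reaction annotated with a glucose phosphate instead:
bad = inconsistent_annotations(
    "ATP", ["ATP", "alpha-D-glucose phosphate"], disjoint)
```

This is exactly the kind of error that stays invisible in a schema-light dataset until disjointness axioms from an ontology are brought to bear on it.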
16. HyQue
HyQue is the Hypothesis Query and Evaluation system
• A platform for knowledge discovery
• Facilitates hypothesis formulation and evaluation
• Leverages Semantic Web technologies to provide access to facts, expert knowledge and web services
• Conforms to a simplified event-based model
• Supports evaluation against positive and negative findings
• Transparent and reproducible evidence prioritization
• Provenance across all elements of hypothesis testing
– Trace a hypothesis to its evaluation, including the data and rules used
Evaluating scientific hypotheses using the SPARQL Inferencing Notation. Extended Semantic Web Conference (ESWC 2012). Heraklion, Crete. May 27-31, 2012.
HyQue: evaluating hypotheses using Semantic Web technologies. J Biomed Semantics. 2011 May 17;2 Suppl 2:S3.
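The event-based evaluation against positive and negative findings can be caricatured as tallying support minus refutation per hypothesised event. Everything here is illustrative: the data layout, the scoring rule, and the example events (borrowed from the yeast galactose network HyQue was demonstrated on) are not HyQue's actual SPIN rules.

```python
def score_hypothesis(events, findings):
    """Score each hypothesised event: +1 per supporting finding,
    -1 per refuting one. The per-event tally is the evidence score,
    and keeping findings alongside scores preserves provenance."""
    scores = {event: 0 for event in events}
    for finding in findings:
        if finding["event"] in scores:
            scores[finding["event"]] += 1 if finding["positive"] else -1
    return scores

findings = [
    {"event": "GAL4 activates GAL1", "positive": True},
    {"event": "GAL4 activates GAL1", "positive": True},
    {"event": "GAL80 inhibits GAL4", "positive": False},
]
scores = score_hypothesis(
    ["GAL4 activates GAL1", "GAL80 inhibits GAL4"], findings)
```

Because each score is a function of identifiable findings, a hypothesis can be traced back to the exact data and rules that produced its evaluation, which is the provenance property the slide emphasizes.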
18. At the heart of Linked Data for the Life Sciences
• Chemicals/drugs/formulations, genomes/genes/proteins, domains
• Interactions, complexes & pathways
• Animal models and phenotypes
• Disease, genetic markers, treatments
• Terminologies & publications
• Free and open source
• Based on Semantic Web standards
• Billions of interlinked statements from dozens of conventional and high-value datasets
• Partnerships with EBI, NCBI, DBCLS, NCBO, OpenPHACTS, and commercial tool providers
19. Customization of rules and rulesets may lead to different evidence-based evaluations
20. Summary
• Quantitative comparison and evaluation is at the heart of the scientific enterprise.
• Scientists that make use of ontologies should control for and quantitatively assess the contribution of any ontology component.
• Ontology designers must include quantitative evaluation to sustain any claims about community agreement, semantic annotation, consistency checking, query answering, or enabling new scientific results.
• We can build on knowledge-sharing platforms like Bio2RDF and hypothesis-testing platforms like HyQue to undertake and evaluate ontology-based research.
Zidovudine is a nucleoside reverse transcriptase inhibitor (NRTI) administered to patients suffering from serious manifestations of HIV infection with acquired immunodeficiency syndrome (AIDS) or AIDS-related complex (ARC) (Arts et al., 1998; Lewis et al., 2001). Known side effects of Zidovudine include fatigue, headache, and myalgia, as well as malaise and anorexia, which demonstrate the association of Zidovudine with mood disorders (Frissen et al., 1994; Max and Sherer, 2000). Clopidogrel is a thienopyridine-derived anti-platelet drug that inhibits platelet aggregation and prolongs bleeding time. It inhibits platelet activation through antagonism of the platelets' adenosine diphosphate (ADP) receptors, inhibits serotonin- and endothelin-1-mediated vascular smooth muscle contraction, and inhibits smooth muscle cell mitogenesis.
The Bio2RDF project transforms silos of life science data into a globally distributed network of linked data for biological knowledge discovery.