- Prescription-Event Monitoring (PEM) is a non-interventional observational cohort technique used to study the safety of new medications prescribed by general practitioners. It involves collecting data on all clinical events reported by patients after being prescribed a new drug.
- PEM yields clinically useful safety information: from data on the first 5,000–18,000 prescriptions, it establishes incidence densities for all events reported during treatment with the monitored drug, allowing event rates before and after drug use to be compared.
- While PEM provides nationally representative data on new drugs in real-world settings, it also has disadvantages, such as an inability to measure compliance or to determine use of non-prescription medications.
Brief notes on what pharmacoepidemiology is, why we need it, its aim, and its main applications, advantages, and disadvantages.
Bayesian theory was developed to improve forecast accuracy by combining a subjective prior prediction with newly collected data.
Bayesian probability is used to improve forecasting in medicine.
Bayesian theory provides a method to weigh prior information (e.g. the physical diagnosis) and new information (e.g. results from laboratory tests) to estimate an updated probability of disease.
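As an illustration of this weighting, here is a minimal sketch in which a prior probability from the physical diagnosis is updated with the sensitivity and specificity of a laboratory test via Bayes' theorem. All probabilities below are invented for illustration.

```python
def posterior_probability(prior, sensitivity, specificity):
    """P(disease | positive test) from a prior and the test characteristics."""
    p_pos_given_disease = sensitivity           # true-positive rate
    p_pos_given_healthy = 1.0 - specificity     # false-positive rate
    numerator = prior * p_pos_given_disease
    denominator = numerator + (1.0 - prior) * p_pos_given_healthy
    return numerator / denominator

# Prior from physical diagnosis: 30% chance of disease.
# Laboratory test: 90% sensitive, 80% specific.
post = posterior_probability(prior=0.30, sensitivity=0.90, specificity=0.80)
print(round(post, 3))  # 0.27 / (0.27 + 0.14) ≈ 0.659
```

A positive test thus roughly doubles the estimated probability of disease in this hypothetical case, which is exactly the "weighing of prior and new information" described above.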
Various outcome measures, such as incidence, prevalence, and other drug use measures, are briefly discussed here with suitable examples and equations.
Adaptive methods with feedback in population pharmacokinetics — clinical pharmacokinetics and therapeutic drug monitoring (fifth-year Pharm D notes).
Genetic polymorphism in drug transporters and drug targets (pharmacogenetics) — Pavithra Vinayak
DRUG TRANSPORTERS
Two types of transporters:
• ATP-binding cassette (ABC) – found in the ABCB, ABCD and ABCG families; associated with multidrug resistance (MDR) of tumor cells, causing treatment failure in cancer.
• Solute carrier (SLC) – transports a variety of solutes, both charged and uncharged.
P-glycoprotein
• ATP-binding cassette subfamily B member 1 (ABCB1)
• Multidrug resistance protein 1 (MDR1)
• Transports various molecules, including xenobiotics, across cell membranes
• Extensively distributed and expressed throughout the body
Mechanism of P-glycoprotein
Substrate binds to P-gp from the inner leaflet of the membrane
ATP binds at the inner side of the protein
ATP is hydrolyzed to ADP, releasing energy
A vaccine is a biological preparation that improves immunity to a particular disease.
No vaccine is completely safe or completely effective: while most known vaccine adverse events are minor and self-limited, some vaccines have been associated with rare but serious health effects.
This presentation explains in detail the outcome measures used in epidemiology, such as prevalence, incidence, fatality rate, and crude death rate.
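As a rough illustration of those measures, the basic definitions can be sketched in a few lines; all counts below are invented for illustration.

```python
def prevalence(existing_cases, population):
    """Proportion of the population with the disease at a point in time."""
    return existing_cases / population

def incidence_rate(new_cases, person_years):
    """New cases per unit of person-time at risk."""
    return new_cases / person_years

def case_fatality_rate(deaths, cases):
    """Proportion of diagnosed cases that die of the disease."""
    return deaths / cases

print(prevalence(50, 10_000))      # 0.005 -> 5 per 1,000
print(incidence_rate(20, 4_000))   # 0.005 new cases per person-year
print(case_fatality_rate(3, 60))   # 0.05 -> 5%
```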
Population pharmacokinetics is the study of the sources and correlates of variability in drug concentrations among individuals in the target patient population receiving clinically relevant doses of a drug of interest.
Bayesian theory in population pharmacokinetics:
1) Introduction to Bayesian theory
2) Bayesian probability applied to drug dosing
3) Applications and uses of Bayesian theory in applied pharmacokinetics
(Therapeutic drug monitoring and clinical pharmacokinetics, fifth-year Pharm D notes)
Challenges in the implementation of GCP guidelines – by RxVichuZ.
This work deals with challenges in the implementation of GCP guidelines; it is based on the Clinical Research subject.
Post-marketing studies of drug effects must then generally include at least 10,000 exposed persons in a cohort study, or enroll diseased patients from a population of equivalent size for a case–control study. A study of this size would be 95% certain of observing at least one case of any adverse effect that occurs with an incidence of 3 per 10,000 or greater (see Chapter 3). However, studies this large are expensive and difficult to perform. Yet, these studies often need to be conducted quickly, to address acute and serious regulatory, commercial, and/or public health crises. For all of these reasons, the past two decades have seen a growing use of computerized databases containing medical care data, so-called “automated databases,” as potential data sources for pharmacoepidemiology studies.
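The "95% certain" figure follows from the probability of seeing at least one case among n exposed persons; a minimal sketch, assuming independent exposures with per-person incidence p:

```python
def prob_at_least_one(p, n):
    """P(at least one case) = 1 - P(no cases) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

# 10,000 exposed persons, adverse effect incidence of 3 per 10,000:
print(round(prob_at_least_one(3 / 10_000, 10_000), 2))  # 0.95
```

This reproduces the text's figure: a cohort of 10,000 gives roughly a 95% chance of observing at least one case of an effect occurring at 3 per 10,000.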
2. EVOLUTION OF PEM
Pre-marketing clinical trials are effective in studying the efficacy of a medicine, but they have limitations in defining its safety in clinical use:
• Small numbers of patients.
• Study products may be received for short durations (sometimes only a single dose), which may not be enough to detect rare ADRs.
• Pre-marketing development programmes are dynamic.
• Special populations are excluded.
3. • The contribution of the spontaneous reporting system in detecting hazards, such as the oculomucocutaneous syndrome with practolol, led Inman to establish the system of Prescription-Event Monitoring (PEM) at the Drug Safety Research Unit (DSRU) in Southampton in 1981.
• In New Zealand, the Medicines Adverse Reactions Committee (MARC) is responsible for conducting such studies for academic purposes, and the programme is known as the Intensive Medicines Monitoring Programme (IMMP).
4. WHAT IS PEM?
• A non-interventional observational cohort technique in which health professionals submit data on all clinical events reported by a patient subsequent to the prescribing of a new drug.
• It is a method of studying the safety of new medications as they are used by general practitioners.
• In PEM, the exposure data are national in scope throughout the collection period and unaffected by the kind of selection and exclusion criteria that characterise clinical trials data.
6. Here, patients being prescribed monitored
drugs, which include virtually all New
Chemical Entities, are studied. The criteria for
a study drug are:
• NCE (New Chemical Entity)
• New pharmacological principle
• Predicted widespread use
• Suspected problems
• Identified but unquantified risks
7. • Information on the first 5,000-18,000
prescriptions for the drug is then obtained.
• Prescribers are contacted with a questionnaire to
determine subsequent events or clinical
outcomes.
• Experience with the drug can then be examined
and the incidence of various events can be
estimated.
• Comparisons are made between the periods before
and after drug use.
e.g.: The occurrence of jaundice with erythromycin
estolate was identified by this method of study.
8. In one such study conducted by MARC, in a cohort of
3,926 patients taking perhexiline and 2,837 taking
labetalol, 25% of all patients discontinued taking
their drug under study.
ADRs were the reason for stopping in 20% and 43%
of cases for each drug, respectively.
• PEM provides clinically useful information because,
from these data, incidence densities are calculated
for all events reported during treatment with the
monitored drug.
9. • Incidence density
– IDt = (No. of events during treatment for period 't' /
No. of patient-months of treatment for period 't') x 1000
Numerator = No. of reports of each event
Denominator = No. of patients exposed to the drug over a
definite time frame (the period of treatment for
each patient), i.e. patient-months
• These incidence densities/incidence rates are
ranked in order of frequency.
• The ranked lists indicate both the nature and
relative frequency of the events reported when these
drugs are used in general practice.
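The incidence-density calculation above can be sketched in a few lines of Python; the event count and patient-months below are hypothetical figures for illustration, not data from a real PEM study:

```python
def incidence_density(n_events, patient_months, per=1000):
    """Incidence density: events per `per` patient-months of treatment
    for a given period t (numerator = reports of the event,
    denominator = patient-months of exposure)."""
    return n_events * per / patient_months

# Hypothetical: 12 reports of an event over 4,800 patient-months of treatment
print(incidence_density(12, 4800))  # 2.5 events per 1,000 patient-months
```

Computing this for every reported event and sorting the results gives the ranked frequency lists described above.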
10. • For example, a study was carried out to assess
the sedative properties of four antihistamines on the
market: loratadine, cetirizine, fexofenadine and
acrivastine.
11. • Results: The odds ratios (adjusted for age and sex) for the
incidence of sedation were 0.63 (95% confidence interval
0.36 to 1.11; P = 0.1) for fexofenadine; 2.79 (1.69 to 4.58; P
< 0.0001) for acrivastine, and 3.53 (2.07 to 5.42; P <
0.0001) for cetirizine, compared with loratadine. No
increased risk of accident or injury was evident with any of
the four drugs.
• Conclusions: Although the risk of sedation was low with all
four drugs, fexofenadine and loratadine may be more
appropriate for people working in safety-critical jobs.
• This study not only showed the sedative effects of the
antihistamines and compared them, it also gave an idea of
the incidence of other ADRs associated with the four drugs.
• In the UK, PEM studies, with their response rates, have been
carried out and documented for over 60 drugs.
12. ADVANTAGES
• Calculation of incidence density
• Carried out on a national scale
• Comparison of ‘reasons for withdrawal’ and incidence
density
• Outcome of exposed pregnancies
• Signal generation and exploration
• Delayed reactions can be detected
• Disease investigation
13. DISADVANTAGES
• No method of measuring compliance
• No method to determine use of non-prescription
medications
• Non-return of green forms
• Does not extend to hospital monitoring
• Data collection is an operational difficulty
15. HISTORY
• The term record linkage was first used by the chief of
the U.S. National Office of Vital Statistics, Dr. Halbert
L. Dunn in a talk given in Canada in 1946.
• Dr. Dunn advocated the use of a unique number (e.g.
birth registration number).
16. • Historically record linkage was assigned to clerks who
would search and review lists to bring together the
appropriate pairs of records for comparison, seek
additional information when there were questionable
matches, and finally make decisions regarding the
linkages based on established rules.
17. HISTORY
• Formal development of a theory of record linkage
started with the pioneering work of Fellegi and Sunter
(1969).
• Several people have worked on extending or
modifying their procedure (Jaro 1989; Winkler 1994).
19. What is Record Linkage?
• Record linkage is the process of bringing together two or
more records relating to the same individual (person),
family or entity (e.g. event, object, geography, business
etc).
• To find syntactically distinct data entries that refer to the
same entity in two or more input files.
• Part of the data cleaning process, which is a crucial first
step in the knowledge discovery process.
22. DETERMINISTIC RECORD LINKAGE
• A pair of records is said to be a link if the two
records agree exactly on each element within a
collection of identifiers called the match key.
• ALL or NONE
• For example, when comparing two records on last
name, street name, year of birth, and street
number, the pair of records is deemed to be a link
only if the names agree on all characters, the years
of birth are the same, and the street numbers are
identical.
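The all-or-none rule can be sketched as follows; the field names and records are illustrative, not from any real linkage system:

```python
def match_key(record):
    # The match key: last name, street name, year of birth, street number
    return (record["last_name"], record["street"],
            record["birth_year"], record["street_no"])

def is_link(rec_a, rec_b):
    """Deterministic rule: link only if every match-key element agrees exactly."""
    return match_key(rec_a) == match_key(rec_b)

a = {"last_name": "Smith", "street": "High St", "birth_year": 1950, "street_no": 12}
b = dict(a)                          # identical record
c = dict(a, last_name="Smithe")      # one character off -> no link
print(is_link(a, b), is_link(a, c))  # True False
```

Note how a single transcription error ("Smithe") defeats the deterministic rule entirely, which is what motivates the probabilistic approach on the next slide.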
23. PROBABILISTIC RECORD LINKAGE
• Formalized by Fellegi and Sunter (1969).
• Pairs of records are classified as links, possible links, or
non-links.
• Here, we consider the probability of a match given the
observed data.
• In probability matching, a threshold of likelihood is set
(which can be varied in different circumstances), above
which a pair of records is accepted as a match, relating to
the same person, and below which the match is rejected.
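A minimal Fellegi-Sunter-style sketch: each field contributes an agreement or disagreement weight, the weights are summed, and the total is compared against two thresholds. The weights and thresholds below are hypothetical; real systems estimate them from the data:

```python
# Hypothetical agreement/disagreement weights per field
WEIGHTS = {"last_name": (4.0, -2.0), "birth_year": (5.0, -3.0), "zip": (2.0, -1.0)}

def score(rec_a, rec_b):
    """Sum field-wise weights: agreement weight on a match, else the penalty."""
    return sum(agree if rec_a[f] == rec_b[f] else disagree
               for f, (agree, disagree) in WEIGHTS.items())

def classify(s, upper=6.0, lower=0.0):
    """Link / possible link / non-link by comparing the score to two thresholds."""
    return "link" if s >= upper else "non-link" if s <= lower else "possible link"

a = {"last_name": "Smith", "birth_year": 1950, "zip": "90210"}
b = {"last_name": "Smith", "birth_year": 1950, "zip": "90211"}
print(classify(score(a, b)))  # 4 + 5 - 1 = 8 >= 6, so "link"
```

Unlike the deterministic rule, a single discrepant field (here the ZIP code) no longer vetoes the link; it merely lowers the score.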
25. STANDARDIZATION
• Every data set contains manual errors, non-matching
abbreviations, etc., which may present the same entity
as separate records without actually being so.
• The first step is to clean and standardise the data.
• E.g.: for input data belonging to Mr. William Marcus
Smith, entries could have been made by different
individuals as:
– Smith W. M.
– William M. Smith
– W.M. Smith
– W.M. Smithe, etc.
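A deliberately crude sketch of such cleaning: strip punctuation and reduce each name to a sorted string of initials, so all four variants above collapse to one comparison key. Real standardization is far richer (nickname lookup tables, phonetic codes such as Soundex, address parsers), so treat this only as an illustration of the idea:

```python
import re

def name_key(raw):
    """Crude comparison key: strip punctuation, take each token's
    uppercase initial, and sort the initials. Illustrative only."""
    tokens = re.sub(r"[.,]", " ", raw).split()
    return "".join(sorted(t[0].upper() for t in tokens))

variants = ["Smith W. M.", "William M. Smith", "W.M. Smith", "W.M. Smithe"]
print({name_key(v) for v in variants})  # all four collapse to {'MSW'}
```

Note this key even absorbs the "Smithe" typo, at the cost of also merging genuinely different people who share initials, which is why keys like this are typically used for blocking rather than as the final linkage decision.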
26. BLOCKING
• Used to reduce the search space (i.e. the number
of record pairs to be compared).
• Groups similar records together into blocks or
clusters.
• The data sets are split into smaller blocks and only
records within the same block are compared.
• E.g. instead of making detailed comparisons of all
90 billion pairs from two lists of 300,000 records
representing all businesses in a state of the U.S., it
may be sufficient to consider the set of 30 million
pairs that agree on U.S. Postal ZIP code.
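The ZIP-code idea above can be sketched on a toy business list (the records are invented for illustration):

```python
from collections import defaultdict
from itertools import combinations

def candidate_pairs(records, block_key):
    """Compare only records that share the same blocking key (e.g. ZIP code)."""
    blocks = defaultdict(list)
    for rec in records:
        blocks[rec[block_key]].append(rec)
    return [pair for block in blocks.values()
            for pair in combinations(block, 2)]

records = [
    {"name": "Acme Ltd",          "zip": "10001"},
    {"name": "ACME Limited",      "zip": "10001"},
    {"name": "Bolt Inc",          "zip": "94105"},
    {"name": "Bolt Incorporated", "zip": "94105"},
]
# An all-pairs comparison would need 4*3/2 = 6 pairs; blocking on ZIP leaves 2.
print(len(candidate_pairs(records, "zip")))  # 2
```

The trade-off is that true matches recorded with different ZIP codes are never compared, so blocking keys are chosen (or combined) to keep that loss small.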
27. MATCHING
Exact Matching:
• Linkage of data for the same unit (e.g., establishment)
from different files.
• Uses identifiers such as name, address, or tax unit
number.
Statistical Matching:
• Attempts to link files that may have few units in
common.
• Linkages are based on similar characteristics rather
than unique identifying information.
28. Requirements for defining a RLS
• The types of linkages required,
• Whether the linkage is performed in batch and/or
interactive mode,
• The security provisions for confidential data files,
• The speed of operation needed,
• The volume of records that can be linked with the
system,
• The initial cost of software, including licensing and
maintenance costs,
• Whether the software is bundled with other software
packages,
• The simplicity and flexibility in defining the rules
used for linkages,
• The accuracy and statistical defensibility of the
product,
• The availability of documentation and training, and
• The maintenance and support of the software.
30. USES
• The system is used to improve data quality and coverage,
for long term medical follow up of cohorts, for creating
patient-oriented rather than event-oriented data, for
building new data sources, and for a range of other
statistical purposes.
• It helps create statistically relevant source of ‘new’
information.
• Answers research questions relating to genetics,
occupational and environmental health, and medical
research.
31. DRAWBACKS
• Issues of privacy and confidentiality
• Policies for conducting studies using such
systems must be transparent
32. APPLICATIONS
• Duplication in data is minimized
• Powerful tool for generating more value out of
existing databases
• Large projects, such as the census of an
entire country, can be planned
• More detailed information can be obtained
• Becomes easier to follow cohorts