The document discusses finding lead compounds for drug design and development. It provides the following key points:
- A lead compound shows potential therapeutic activity and serves as a starting point for drug optimization. Lead compounds can be found through screening natural or synthetic compounds or via molecular modeling.
- Lead compounds can come from various sources, including natural ligands, existing drugs, and high-throughput screening. High-throughput screening involves screening large chemical libraries against biological targets using automated assays.
- Several approaches are used to search for hit compounds from libraries, including traditional library screening, fragment-based screening, and virtual screening techniques like docking and pharmacophore modeling. Hit compounds then undergo further evaluation and optimization to become lead compounds.
The document discusses various topics related to drug discovery through bioinformatics and computational approaches. It begins by discussing comparative genomics and using knowledge about model organisms to identify similar biological areas and pathways in other species. It also discusses topics like high-throughput screening of large libraries, the definitions of targets, hits and leads in drug discovery, and approaches like using RNAi and phenotypic screening in model organisms. Finally, it discusses computational methods that can be used throughout the drug discovery process, including for target identification and validation, virtual screening, assessing drug-likeness of compounds, and describing compounds using structural and physicochemical descriptors.
The document discusses various considerations for identifying central nervous system (CNS) drugs, including bioavailability. It defines bioavailability as the amount of drug available in the body to act at the target. Three key points are made: 1) Drugs must reach the CNS target area in sufficient amounts during the appropriate time window to have efficacy, otherwise bioavailability limits efficacy; 2) Molecular properties influence absorption, distribution, metabolism and excretion of drugs, impacting bioavailability; 3) Case studies show how changes in CNS bioavailability and metabolism can impact drug safety and efficacy.
This document discusses various topics related to drug discovery through bioinformatics. It begins by describing how genome-wide RNAi screening in the nematode C. elegans can be used to identify genes involved in biological pathways related to diseases like type-2 diabetes. It then discusses topics like structural genomics, target identification and validation, high-throughput screening approaches and facilities, sources for screening libraries, criteria for hit and lead compounds, and computational methods used in hit identification and optimization like pharmacophore modeling and evaluating compounds against the "rule of five". Descriptors that can be used for characterizing compounds are also listed.
The document outlines the schedule and content for a bioinformatics course. It includes 10 lessons covering topics like biological databases, sequence alignments, database searching, phylogenetics, and protein structure. It also mentions that the final exam will include randomly generated images from a set of 713 images.
This document provides an overview of ChEMBL, a large database of medicinal chemistry data maintained by EMBL-EBI. It describes the types of data contained in ChEMBL, including over 1.6 million compounds, 10,000 targets, and 12 million bioactivities extracted from literature. ChEMBL aims to comprehensively catalogue historical drug discovery successes and failures to identify patterns and support drug discovery. All data is freely available under an open license.
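ChEMBL data can be retrieved programmatically through its public REST web services. As a minimal sketch, the helper below only constructs a molecule-record URL against the ChEMBL API base; the actual network fetch is left commented out so the example stays self-contained, and the specific ID used is an assumption for illustration.

```python
# Sketch of querying the ChEMBL REST web services.
# Only URL construction is executed here; the fetch is shown as a comment.

CHEMBL_BASE = "https://www.ebi.ac.uk/chembl/api/data"

def molecule_url(chembl_id, fmt="json"):
    """Build the web-service URL for a single molecule record."""
    return f"{CHEMBL_BASE}/molecule/{chembl_id}.{fmt}"

url = molecule_url("CHEMBL25")  # CHEMBL25 is aspirin's ChEMBL identifier
print(url)

# To actually fetch the record, one could use the standard library:
# import json, urllib.request
# record = json.load(urllib.request.urlopen(url))
```

The same base URL pattern extends to other entity types (targets, activities) documented by the ChEMBL web services.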
The document discusses various topics related to drug discovery including target identification and validation, high-throughput screening, hit and lead identification, computational approaches like docking and de novo design, and clinical trial phases. It provides definitions for key terms like target, screening, hit, and lead. It also discusses sources for screening libraries and describes factors to consider for an optimal drug target.
The drug discovery process involves several steps:
1) Hits from high-throughput screening are identified, which may have many potential scaffolds.
2) Hit-to-lead involves synthesizing many compounds to determine structure-activity relationships and improve properties.
3) Lead optimization aims to increase potency, selectivity, and in vivo efficacy while maintaining favorable properties. Efficient synthesis and parallel chemistry methods are important throughout the process.
This document discusses various topics related to drug discovery through bioinformatics and computational approaches. It covers target identification and validation, high-throughput screening, developing hits into leads, evaluating drug-likeness of compounds using rules like Lipinski's Rule of Five, and using computational descriptors for virtual screening. The goal is to discuss how computational tools can help streamline the drug discovery process by aiding in target selection and validation, compound screening and optimization of leads.
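A drug-likeness check of the kind mentioned above can be sketched in a few lines. In this illustrative filter, the four Lipinski descriptors are supplied directly as numbers (in practice they would be computed from a structure by a cheminformatics toolkit), and the convention of tolerating a single violation is an assumption of this sketch.

```python
# Illustrative Lipinski Rule of Five filter over precomputed descriptors.

def passes_rule_of_five(mol_weight, logp, h_bond_donors, h_bond_acceptors):
    """Return True if a compound violates at most one Lipinski criterion."""
    violations = 0
    if mol_weight > 500:        # molecular weight over 500 Da
        violations += 1
    if logp > 5:                # calculated logP over 5
        violations += 1
    if h_bond_donors > 5:       # more than 5 hydrogen-bond donors
        violations += 1
    if h_bond_acceptors > 10:   # more than 10 hydrogen-bond acceptors
        violations += 1
    return violations <= 1      # a single violation is tolerated here

# Aspirin-like descriptor values sit well inside all four limits.
print(passes_rule_of_five(180.2, 1.2, 1, 4))   # True
```

Such a filter is typically applied early in virtual screening to discard compounds unlikely to be orally bioavailable.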
Daniel Romo is moving his research group from Texas A&M University to Baylor University in August 2015. His group conducts synthetic, biological, and biosynthetic studies of bioactive natural products with a focus on developing novel synthetic strategies and understanding the mode of action of natural products through activity-based proteomic profiling and molecular studies. Current projects include the total synthesis of oxazolomycin and gracillins and investigating the mechanisms of rameswaralide, ophiobolin, and agelastatin A.
Tino Miller is a versatile chemist with over 8 years of postgraduate experience in research organizations. He has broad experience in organic, analytical, radiochemistry, medicinal, and process chemistry. His skills include synthesis, analysis, and purification of organic compounds as well as regulatory compliance. He has worked on drug development projects for diseases such as cancer and schistosomiasis. Miller's experience includes positions at St. Jude Children's Research Hospital, Iowa State University, Mayo Clinic, and the University of North Florida.
Mel Reichman on Pool Shark’s Cues for More Efficient Drug Discovery — Jean-Claude Bradley
Mel Reichman, senior investigator and director of the LIMR Chemical Genomics Center at the Lankenau Institute for Medical Research presents at the chemistry department at Drexel University on November 12, 2009.
Modern drug discovery by high-throughput screening (HTS) begins with testing hundreds of thousands of compounds in biological assays. The confirmed hit rate for typical HTS is less than 0.5%; therefore, 99.5% of the costs of HTS are for generating null data. Orthogonal convolution of compound libraries (OCL) is 500% more efficient than present HTS practice. The OCL method combines 10 compounds per well. An advantage of this method is that each compound is represented twice in two separately arrayed pools. The potential for the approach to better enable academic centers of excellence to validate medicinally relevant biological targets is discussed.
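The pooling idea behind OCL can be illustrated with a toy simulation: compounds are laid out on a grid and pooled once by row and once by column, so every compound appears in exactly two separately arrayed pools, and a single active compound is decoded as the intersection of the one positive row pool and the one positive column pool. The grid size and decoding scheme below are illustrative assumptions, not the published OCL protocol.

```python
# Toy orthogonal-pooling simulation: N compounds on a grid, assayed as
# row pools and column pools instead of one well per compound.

def decode_single_hit(n_rows, n_cols, is_active):
    """Identify the unique active compound from row- and column-pool assays."""
    # A pool scores positive if any compound it contains is active.
    positive_rows = [r for r in range(n_rows)
                     if any(is_active(r * n_cols + c) for c in range(n_cols))]
    positive_cols = [c for c in range(n_cols)
                     if any(is_active(r * n_cols + c) for r in range(n_rows))]
    assert len(positive_rows) == 1 and len(positive_cols) == 1
    return positive_rows[0] * n_cols + positive_cols[0]

# 100 compounds on a 10x10 grid need only 20 pooled wells, not 100.
hit = decode_single_hit(10, 10, lambda i: i == 37)
print(hit)  # 37
```

The efficiency gain comes from testing 2*sqrt(N) pools instead of N wells; deconvolution becomes harder when multiple actives are present, which is where redundant orthogonal arrays help.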
The document describes research on designing novel pyrimidine derivatives as Tankyrase inhibitors for treating colorectal cancer. Key steps included pharmacophore modeling to identify important structural features, virtual screening of databases to find potential hits, and 3D-QSAR analysis to guide molecule design. Several molecules were designed and docked into the Tankyrase enzyme active site. In vitro cytotoxicity tests on some synthesized derivatives showed inhibitory activity against MCF-7 cancer cells in the low micromolar range, though non-toxic to normal cells. The research thus identified new molecules with potential as Tankyrase inhibitor anticancer agents.
This document provides an overview of drug design and discovery. It discusses several approaches to drug design including natural sources, chemical modification, screening, and rational drug design. Rational drug design uses computer-aided techniques like molecular modeling to design drugs that optimally interact with biological targets linked to disease. The development of new drugs is a long multi-step process taking 10-12 years and involving target identification, preclinical testing, and human clinical trials. Computational methods like quantitative structure-activity relationships (QSAR) and molecular modeling are now widely used in drug design to accelerate the process.
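At its simplest, a QSAR model relates a numerical descriptor to measured activity. The sketch below fits a one-descriptor linear model by ordinary least squares; the descriptor values and activities are made-up illustrative data, not results from any of the documents summarized here.

```python
# Minimal one-descriptor QSAR sketch: fit activity = a * descriptor + b
# by ordinary least squares, using only the standard library.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

logp = [1.0, 2.0, 3.0, 4.0]     # hypothetical descriptor values
pic50 = [5.1, 6.0, 7.1, 7.8]    # hypothetical measured activities
a, b = fit_line(logp, pic50)
print(round(a, 2), round(b, 2))
```

Real QSAR work uses many descriptors, regularized or nonlinear models, and careful train/test validation, but the fitting step is conceptually this.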
This document provides an overview of drug design and discovery. It discusses the traditional approaches of natural sources, chemical modification, screening, and rational drug design. The drug discovery process is outlined, taking 10-12 years and involving multiple disciplines. Advances in genomics, high-throughput screening, molecular modeling, and other technologies are impacting the process and potentially reducing time to market. Molecular modeling and computational tools play an important role in rational drug design approaches.
The document discusses herbal and drug design technology. It covers the process of drug discovery including target identification, lead identification, lead optimization, preclinical and clinical testing. It also discusses sources of lead compounds including natural products, observed drug side effects, and in silico computer-assisted drug design. Advances in molecular biology, genomics, high throughput screening and computer-aided drug design have revolutionized the field. Herbal drug design involves a multidisciplinary approach combining botanical, phytochemical and biological techniques to discover drugs from plants.
The document discusses lead identification in drug development. It defines a lead compound as one that shows desired pharmaceutical activity and could potentially be developed into a drug. The document outlines the content to be presented, including an introduction to lead identification, what a lead is, properties of leads, and methods for identifying leads. Key methods discussed are random screening, non-random screening, high-throughput screening, and structure-based drug design.
Computer-aided drug design uses computational approaches to aid the drug discovery process. Key approaches include ligand-based methods, which identify characteristics of known active ligands; target-based methods, which use information about the biological target; and structure-based drug design, which exploits 3D structural information. The main steps in drug design are target identification and validation, lead identification and optimization, and preclinical and clinical trials. Computational tools are used throughout the process for tasks like molecular docking, ADMET prediction, and structure–activity relationship analysis.
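A common ligand-based screening primitive is fingerprint similarity. The sketch below computes the Tanimoto coefficient over sets of "on" fingerprint bits; the bit sets are made-up for illustration, whereas real fingerprints would be generated from structures by a cheminformatics toolkit.

```python
# Ligand-based similarity sketch: Tanimoto coefficient on fingerprint
# bit sets (fraction of shared bits among all set bits).

def tanimoto(bits_a, bits_b):
    """Tanimoto similarity of two fingerprint bit sets, in [0, 1]."""
    a, b = set(bits_a), set(bits_b)
    if not a and not b:
        return 0.0          # convention: two empty fingerprints match nothing
    return len(a & b) / len(a | b)

query = {1, 4, 9, 12, 20}       # hypothetical query-compound bits
candidate = {1, 4, 9, 15}       # hypothetical library-compound bits
print(tanimoto(query, candidate))  # 3 shared bits / 6 total = 0.5
```

In virtual screening, a library is ranked by similarity to one or more known actives and the top-scoring compounds are prioritized for assay.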
The document discusses lead identification and optimization in drug design. It describes the general drug discovery process which includes target validation, assay development, high-throughput screening, hit to lead identification, and lead optimization stages. Lead optimization is one of the most important steps and involves modifying lead compounds to improve potency, selectivity, and pharmacokinetic parameters. Structure-based and ligand-based drug design approaches are used, along with in silico tools to predict properties like toxicity and ensure drug-likeness. Key steps in structure-based design include identifying the binding site and growing fragments in an iterative process until an optimized lead is obtained.
Access to Research
Date: 11-08-2018
Venue: Conference Hall, NIAS, IISc campus
A conference and workshops for clinical practitioners, introducing them to modern tools and an alternative approach to modern scientific research.
Purpose
1. Build a network of physicians across the country
2. Train physicians to analyse clinical data and restructure it to make it compatible with research standards
3. Introduce modern tools to understand the mechanisms of action of medicines
4. Introduce artificial intelligence and machine learning to clinical practitioners to support decision-making processes
Access to Science
Clinical experience and traditional knowledge are important sources of data that affect decision making processes in modern healthcare systems. This data should be made accessible for scientific evaluation and validation to improve healthcare worldwide. The Open Source Pharma Foundation believes that clinical practitioners from various disciplines should have the right to access research so that they can help identify problems, contribute their scientific knowledge, and support the discovery ecosystem.
Background
The majority of medical practitioners working on the ground level with patients do not take part in open clinical research worldwide. However, the data collected and owned by them plays an important role in establishing better discovery pathways. Through this workshop, we seek to open opportunities to enhance health care systems around the world and to overcome the following challenges faced by medical practitioners.
1. Regulatory limitations
2. Academic limitations
3. Time constraints
4. Lack of access to modern tools
Drug Discovery Today: Fighting TB with Technology — rendevilla
This document discusses desktop drug discovery and development using computational methods. It covers rational drug design approaches like computer-aided drug design (CADD), targeting identification and validation, lead discovery and optimization, and preclinical testing using molecular modeling and simulation. Specific examples are provided of structure-based drug design against targets for tuberculosis and the preclinical evaluation of candidate compounds.
LEAD IDENTIFICATION BY SUHAS PATIL (S.K.) — suhaspatil114
This document provides an overview of lead identification in drug discovery. It discusses various methods for identifying lead compounds, including combinatorial chemistry, high-throughput screening, and in silico lead discovery techniques. Combinatorial chemistry allows for the rapid production and screening of large compound libraries. High-throughput screening assays test large numbers of compounds against biological targets using automated technologies. In silico methods like molecular docking use computer simulations to predict how compounds may bind and interact with targets. The goal is to find initial "hit" compounds that can then be optimized into drug candidates.
1) Computer-aided drug design (CADD) uses computational methods and resources to facilitate the design and discovery of new therapeutic solutions.
2) CADD can be used at various stages of drug discovery, including hit identification, hit-to-lead optimization, and lead optimization.
3) The objectives of CADD are to speed up the drug screening process and enable more rational and targeted drug design and testing.
The Open Source Drug Discovery (OSDD) strategy uses an open innovation model with a porous-walled funnel to facilitate the free flow of ideas and projects. It brings in more contributors to look at projects and enables redundancies and parallelization. OSDD acts as a facilitator to marry academic and delivery-focused approaches and provides expertise, discovery platforms, and coordination of activities from both individual and centrally coordinated projects. OSDD has established multiple platforms for drug discovery including compound management, screening, target validation, and mechanistic studies. It has an extensive portfolio involving over 180 principal investigators from over 100 institutions working on projects ranging from whole cell screening to structure-based drug design.
Rapid In Vivo Assessment of Bioactivity in Zebrafish: High Content Data for P... — OSU_Superfund
Dr. Robert Tanguay's presentation on April 30, 2014 with the 21st Century Toxicology Seminar Series of the California Dept. of Pesticide Regulation. https://www.facebook.com/media/set/?set=a.766268766739722.1073741858.440748475958421&type=3&uploaded=5
For more information about the research of Robert Tanguay, visit the Superfund Research Program: http://superfund.oregonstate.edu and the Environmental Health Science Center: http://ehsc.oregonstate.edu
This document describes a study that developed a photoacoustic spectroscopy method to noninvasively monitor endogenous methane production in small laboratory animals and humans. The method was used to measure whole-body methane emission in mice and rats under normal conditions, after antibiotic treatment to reduce gut methanogens, and after lipopolysaccharide administration. Single-breath methane analyses were also performed on human participants. The study aimed to establish photoacoustic spectroscopy as a reliable tool for monitoring in vivo methane dynamics in response to various treatments.
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
Executive Directors Chat Leveraging AI for Diversity, Equity, and InclusionTechSoup
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
2. The lead compound
• A compound demonstrating a property likely to be therapeutically useful
• The level of activity and target selectivity are not crucial
• Used as the starting point for drug design and development
• Found by design (molecular modelling or NMR) or by screening compounds (natural or
synthetic)
• A suitable biological test (assay) must be identified in order to find a lead compound
• Active principle - a compound that is isolated from a natural extract and which is
principally responsible for the extract’s pharmacological activity. Often used as a lead
compound.
3. Lead Compounds from a Variety of Sources
4. Natural Ligands
5. Existing Drugs
6. High Throughput Screening (HTS)
[Chemical structures shown: Penicillin, Taxol, Viagra.]
1. Chance Discovery
2. Natural Products
3. Clinical Observation
4. Natural Ligands
[Structures: catecholamines — R = H, noradrenaline; R = Me, adrenaline. Annotated modifications: catechol bioisostere (reduces toxicity); increased N-substituent size (improves selectivity and duration of action). Resulting drugs: Formoterol (AstraZeneca) and Salbutamol (GlaxoSmithKline).]
5. Existing Drugs
Also known as the "Me-Too" or "Me-Better" approach.
[Structures: Viagra (Pfizer), Cialis (Eli Lilly), Levitra (Bayer).]
Issues with the original drug (Viagra, Pfizer): short duration of action; multiple side effects; incompatibility with other drugs.
Me-too successors (Cialis, Levitra): fewer side effects and drug incompatibilities; 36 h duration for Cialis ("the weekend pill").
BEWARE: patent issues!
6. High Throughput Screening (HTS)
• Validated, tractable targets — target selection for HTS
• Industrialised process — HTS assay technologies and automation
• Chemical diversity — sample selection for HTS
How?
"An industrialised process which brings together validated, tractable targets and chemical diversity to rapidly identify novel lead compounds for early phase drug discovery"
50–70% of new drug projects originate from an HTS campaign.
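In an HTS campaign, raw plate signals are normalised against on-plate controls and compounds above an activity cutoff are flagged as hits. A minimal sketch of that normalisation step, with illustrative control values and a 50% inhibition cutoff that are assumptions, not taken from the slides:

```python
# Sketch: normalising raw HTS well signals to percent inhibition and
# flagging hits. Control values and the 50% cutoff are illustrative.

def percent_inhibition(signal, neg_ctrl, pos_ctrl):
    """Normalise a raw well signal against plate controls.

    neg_ctrl: mean signal with no inhibition (0%).
    pos_ctrl: mean signal at full inhibition (100%).
    """
    return 100.0 * (neg_ctrl - signal) / (neg_ctrl - pos_ctrl)

def pick_hits(wells, neg_ctrl, pos_ctrl, cutoff=50.0):
    """Return compound IDs whose percent inhibition meets the cutoff."""
    return [cid for cid, s in wells.items()
            if percent_inhibition(s, neg_ctrl, pos_ctrl) >= cutoff]

wells = {"cmpd-1": 950.0, "cmpd-2": 400.0, "cmpd-3": 120.0}
hits = pick_hits(wells, neg_ctrl=1000.0, pos_ctrl=100.0)
print(hits)  # cmpd-2 (66.7%) and cmpd-3 (97.8%) pass the 50% cutoff
```

Real campaigns add plate-quality statistics (e.g. the Z'-factor) before trusting any hit list, but the normalisation itself is this simple.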
9. The Drug Discovery Process
Discovery:
• Studies of disease mechanisms — animal studies (relevant species; transgenic KO/KI mice; conditional KOs) and molecular studies (agonists/antagonists; antibodies; antisense; RNAi)
• Target selection & validation — target classes: receptor, ion channel, transporter, enzyme, signalling molecule
• Lead search — develop assays (use of automation); chemical diversity; highly iterative process
• Lead optimization — selectivity; efficacy in animal models; tolerability (are AEs mechanism-based or structure-based?); pharmacokinetics; highly iterative process
Development:
• Drug candidate — safety testing
• Human studies — Phases I, II, III
• Drug approval and registration
15. CCE – Common Combinatorial Reactions
• Amide coupling: R1R2NH + R3CO2H → R1R2N–C(O)R3 (HATU, Et3N, NMP)
• Sulphonamide formation: R1R2NH + R3SO2Cl → R1R2N–SO2R3 (Et3N, NMP)
• Reductive amination: R1R2NH + R3CHO → R1R2N–CH2R3 (Na(OAc)3BH, AcOH, NMP)
[Scheme also shows the structure of HATU (a uronium coupling reagent, PF6– salt) used in NMP.]
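Each reaction above couples one monomer from an amine set with one from an acid (or sulphonyl chloride, or aldehyde) set, so the library grows as the product of the set sizes. A minimal enumeration sketch — plain labels stand in for real structures (e.g. SMILES), which is an assumption made to keep the example self-contained:

```python
# Sketch: enumerating a combinatorial amide library as all amine x acid
# pairings. Labels are placeholders for actual monomer structures.

from itertools import product

amines = ["amine-a", "amine-b", "amine-c"]
acids = ["acid-x", "acid-y", "acid-z", "acid-w"]

# Every amine is coupled with every acid (HATU/Et3N in NMP above),
# so the library size is the product of the two monomer-set sizes.
library = [f"{am} + {ac} -> amide" for am, ac in product(amines, acids)]
print(len(library))  # 3 amines x 4 acids = 12 amides
```

This multiplicative growth is why combinatorial chemistry can deliver the "chemical diversity" that HTS consumes: three reaction types over modest monomer sets already yield thousands of products.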
20. Example 1: Inhibition of Stromelysin
Stromelysin is a zinc-dependent protease that is responsible for breaking down and
re-forming connective tissues, including collagen.
23. Example 3: Protein Kinase Inhibitor — Fragment Library
[Structures of library fragments 1–12 (O-methyl oxime derivatives).]
D. J. Maly, I. C. Choong and J. A. Ellman, Proc. Natl. Acad. Sci. USA, 2000, 97, 2419–2424.

24. Library for the Protein Kinase Inhibitor
[The same fragments 1–12, with the active fragment highlighted: Ki = 41 µM.]
D. J. Maly, I. C. Choong and J. A. Ellman, Proc. Natl. Acad. Sci. USA, 2000, 97, 2419–2424.
25. Library of Protein Kinase Inhibitor Fragments (continued)
[Structures of library fragments 13–24, with the active fragment highlighted: Ki = 40 µM.]
D. J. Maly, I. C. Choong and J. A. Ellman, Proc. Natl. Acad. Sci. USA, 2000, 97, 2419–2424.
26. Fragment-Based Design: Protein Kinase Inhibition
D. J. Maly, I. C. Choong and J. A. Ellman, Proc. Natl. Acad. Sci. USA, 2000, 97, 2419–2424.

IC50 (µM) against Src-family kinases:

Compound | c-Src         | Fyn       | Lyn       | Lck
[7]      | 41 ± 5        | >1000     | >1000     | >1000
[16]     | 40 ± 16       | 64 ± 50   | 400 ± 170 | >500
[7,16]   | 0.064 ± 0.038 | 5.0 ± 2.4 | 13 ± 2.4  | >250

[Structures of fragments 7 and 16.]
27. Correlation of Linker Structure with IC50 Values for c-Src Inhibition

Entry | Compound            | Linker      | c-Src IC50 (µM)
1     | 7,16, n=2           | (CH2)n      | 0.064 ± 0.038
2     | 7,16, n=3           | (CH2)n      | 1.1 ± 0.2
3     | 7,16, n=4           | (CH2)n      | 6.5 ± 3.0
4     | 7,16, n=5           | (CH2)n      | 6.5 ± 0.8
6     | 7,16, n=6           | (CH2)n      | 5.3 ± 2.1
7     | 7,16, cis           | [structure] | 1.2 ± 0.6
8     | 7,16, trans (1R,2R) | [structure] | 0.62 ± 0.02

[Structure of the linked compound 7–linker–16.]
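The payoff of fragment linking is easiest to see as a fold-improvement in potency: each fragment alone is a ~40 µM binder, while the optimally linked compound reaches 64 nM. A small calculation sketch using the c-Src IC50 values from the table on slide 26:

```python
# Sketch: fold-gain in c-Src potency from linking fragments 7 and 16,
# using the IC50 values reported on slide 26.

ic50_uM = {"7": 41.0, "16": 40.0, "7,16": 0.064}

# Fold improvement of the linked compound over the better single fragment.
best_fragment = min(ic50_uM["7"], ic50_uM["16"])
fold_gain = best_fragment / ic50_uM["7,16"]
print(round(fold_gain))  # ~625-fold more potent than fragment 16 alone
```

The linker-length table above shows the same point from the other side: stretching the tether from n=2 to n=4 throws away most of that gain, so the geometry of the link matters as much as the fragments themselves.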
29. 3. Virtual Screening
Schematic virtual-screening strategy (filters applied in sequence):
1. Lipinski's rule screening
2. Pharmacophore screening
3. ADMET-based screening
4. Docking, consensus scoring, visual inspection
→ Hit compound
5. IC50 prediction, in silico ADME prediction, in silico toxicity prediction (TOPKAT)
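The first stage of the funnel, Lipinski's rule-of-five, can be sketched in a few lines. Descriptor values are assumed to be precomputed (in practice by a cheminformatics toolkit such as RDKit); the example molecules and their values below are illustrative, not taken from the slides:

```python
# Sketch of a Lipinski rule-of-five filter. A compound is commonly kept
# if it violates at most one of the four rules.

def passes_lipinski(mw, logp, hbd, hba):
    """Rule of five: flag compounds likely to be orally bioavailable."""
    violations = sum([
        mw > 500,    # molecular weight <= 500 Da
        logp > 5,    # calculated logP <= 5
        hbd > 5,     # <= 5 hydrogen-bond donors
        hba > 10,    # <= 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# Illustrative descriptor values (assumptions, not from the slides):
print(passes_lipinski(mw=349.8, logp=2.1, hbd=1, hba=5))   # True
print(passes_lipinski(mw=853.9, logp=3.0, hbd=4, hba=14))  # False (two violations)
```

Because the filter is cheap, it is run first so that the expensive downstream stages (pharmacophore matching, docking) only see drug-like candidates.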
30. 2D/3D Substructure Searching
a. Compound similarity — selection of compounds based on chemical similarity to known active compounds, using some similarity measure.
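The similarity measure behind most 2D searching is the Tanimoto coefficient over binary fingerprints. A minimal sketch, representing each fingerprint as the set of its "on" bit positions — real fingerprints (e.g. ECFP) would come from a cheminformatics toolkit, so the bit sets below are illustrative:

```python
# Sketch: Tanimoto similarity between binary molecular fingerprints,
# each represented here as a set of "on" bit positions.

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Shared on-bits divided by total distinct on-bits (0.0–1.0)."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

query = {1, 4, 9, 15, 23}      # fingerprint of a known active
candidate = {1, 4, 9, 30}      # fingerprint of a library compound
print(tanimoto(query, candidate))  # 0.5 (3 shared / 6 distinct bits)
```

A common practice is to keep library compounds above a threshold such as 0.7 as "similar enough" to the known active, though the right cutoff depends on the fingerprint type.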
32. Filtering Hits to Lead Compounds
1. Pharmacodynamics and pharmacokinetics
2. Biological assays
3. Lipinski's rules and related indices
4. Etc.
33. Lead Optimization
Classic medicinal chemistry — improve the classic properties of:
o Potency
o Selectivity
o Toxic side effects
o Pharmacokinetics
Computer-aided drug discovery:
o Docking
o 2D-, 3D-, 4D-QSAR
o 3D-QSAR pharmacophores
o Dynamic simulation
o The drug discovery pipeline
o ADMET profiling
o Toxicity profiling
34. Structure-Based and Ligand-Based Drug Design
There are two major types of drug design: structure-based drug design and ligand-based drug design.
Structure-based (receptor known, ligands unknown):
- Get a structure — structure modelling (homology or experimental X-ray/NMR/neutron)
- Protein/ligand interactions — structure/biophysics
- Docking
Ligand-based (receptor unknown, ligands known):
- Statistical analysis (e.g. QSAR) of which group(s) are important for biological activity