This document discusses the role and methods of systems biology in drug discovery and development. It covers key topics such as:
- The challenges of interpreting large omics data sets and how systems biology aims to integrate multi-omics data.
- Examples of how systems biology approaches like computational modeling can be used in target discovery, understanding drug mechanisms of action, predicting drug combinations, and more.
- How systems biology methods that combine experimental data with modeling are being applied across various stages of the drug development process, from preclinical research to determining side effects.
Systems biology is the computational and mathematical modeling of complex biological systems. It is a biology-based interdisciplinary field of study that focuses on complex interactions within biological systems, using a holistic approach (holism instead of the more traditional reductionism) to biological research.
Sequence alignment involves identifying corresponding portions of biological sequences, such as DNA, RNA, and proteins, in order to analyze similarities and differences at the level of individual bases or amino acids. This can provide insights into structural, functional, and evolutionary relationships. Sequence alignment has many applications, including searching databases for similar sequences, constructing phylogenetic trees, and predicting protein structure. It works by finding an optimal correspondence between sequences that preserves the order of residues while maximizing matches and minimizing mismatches. Quantitative measures of sequence similarity, such as Hamming distance and Levenshtein distance, calculate the number of differences between aligned sequences.
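To make those two distance measures concrete, here is a minimal Python sketch (the example sequences are illustrative): Hamming distance counts mismatches between equal-length sequences, while Levenshtein distance counts the minimum number of substitutions, insertions, and deletions needed to turn one sequence into another.

```python
def hamming(a: str, b: str) -> int:
    """Number of mismatched positions between two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("Hamming distance requires equal-length sequences")
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a: str, b: str) -> int:
    """Minimum edits (substitution, insertion, deletion) turning a into b."""
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution or match
        prev = curr
    return prev[-1]

print(hamming("GATTACA", "GACTATA"))      # 2 mismatched positions
print(levenshtein("GATTACA", "GCATGCU"))  # 4 edits
```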
This document provides an introduction and overview of the field of bioinformatics. It discusses how bioinformatics combines computer science and biology to analyze large amounts of biological data. Specifically, it mentions that bioinformatics uses algorithms and techniques from computer science to solve complex biological problems related to areas like molecular biology, genomics, drug discovery, and more. It also outlines some of the key applications of bioinformatics like sequence analysis, protein structure prediction, genome annotation, and comparative genomics. Finally, it provides brief descriptions of important biological databases and resources that bioinformaticians use to store and analyze genomic and protein sequence data.
Databases and pathways of genomics and proteomics (Sachin Kumar)
The document discusses several databases related to human metabolism and pharmacology. It describes the contents and purpose of each database, including the Human Metabolome Database (HMDB), KEGG, MetaCyc, PubChem, DrugBank, the Therapeutic Target Database (TTD), PharmGKB, and Chemical Entities of Biological Interest (ChEBI). These databases contain chemical, clinical, molecular biology, pathway, and genomic data on human metabolites, drugs, and targets.
Homology modeling is a technique used to predict the 3D structure of a protein based on the alignment of its amino acid sequence to known protein structures. It relies on the observation that structure is more conserved than sequence during evolution. The key steps in homology modeling include: 1) identifying a template structure through sequence alignment tools like BLAST, 2) correcting any errors in the initial alignment, 3) generating the protein backbone based on the template structure, 4) modeling any loops or missing regions, 5) adding side chains, 6) optimizing the model structure energetically, and 7) validating that the final model matches the template structure and has correct stereochemistry. Homology modeling is useful for applications like structure-based drug design.
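As a hedged illustration of step 1 only (template identification), the following sketch uses Biopython's NCBI web-BLAST interface to look for candidate templates in a PDB-derived database. The query fragment is made up, and a real workflow would filter hits by E-value and sequence coverage before building any model.

```python
# Sketch: find candidate homology-modeling templates via NCBI BLAST (Biopython).
from Bio.Blast import NCBIWWW, NCBIXML

query = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"    # toy protein fragment (assumption)
handle = NCBIWWW.qblast("blastp", "pdb", query)  # search sequences of known structure
record = NCBIXML.read(handle)

for alignment in record.alignments[:3]:          # top candidate templates
    hsp = alignment.hsps[0]
    print(alignment.title[:60], f"E={hsp.expect:.2e}")
```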
Force fields are mathematical functions used to describe potential energy in molecular modeling simulations. Common classical force fields include AMBER, CHARMM, GROMOS, and MMFF; GROMACS is molecular dynamics software that supports several force fields, including AMBER and CHARMM. AMBER was developed at UCSF and has parameter sets for proteins, nucleic acids, and small molecules. GROMOS is a united-atom force field optimized for alkanes. MMFF is derived from quantum calculations and experimental data for drug-like molecules. CHARMM was developed at Harvard and has broad coverage of biomolecules and organic compounds.
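For reference, most of these classical force fields share (with variations) an additive functional form along the following lines. This is the typical AMBER-style expression, shown for orientation rather than as the exact parameterization of any one package:

```latex
E_{\text{total}} =
  \sum_{\text{bonds}} k_b (r - r_0)^2
+ \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2
+ \sum_{\text{dihedrals}} \frac{V_n}{2}\bigl[1 + \cos(n\phi - \gamma)\bigr]
+ \sum_{i<j} \left( \frac{A_{ij}}{r_{ij}^{12}} - \frac{B_{ij}}{r_{ij}^{6}}
                  + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right)
```

The bonded terms (bonds, angles, dihedrals) penalize deviations from reference geometry, while the non-bonded sum combines Lennard-Jones and Coulomb interactions.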
INTRODUCTION
A PERFECT THERAPEUTIC DRUG
DRUG DISCOVERY - HISTORY
MODERN DRUG DISCOVERY
BIOINFORMATICS IN DRUG DISCOVERY
DRUG DISCOVERY BASED ON BIOINFORMATIC TOOLS
BIOINFORMATICS IN COMPUTER-AIDED DRUG DISCOVERY
ECONOMICS OF DRUG DISCOVERY
CONCLUSION
REFERENCES
This document provides an outline for a presentation on biological networks, including introducing biological networks, describing their basic components and types, methods for predicting and building networks, sources of interaction data, tools for network visualization and analysis, and a demonstration of building, visualizing and analyzing biological networks using Cytoscape. The presentation covers topics like nodes and edges in networks, features used to analyze networks, methods for predicting networks from sequences and omics data, integrated databases for interaction data, and popular tools for searching, visualizing and performing network analysis.
This document summarizes different computational methods for protein structure prediction, including homology modeling, fold recognition, threading, and ab initio modeling. Homology modeling relies on identifying proteins with similar sequences and known structures. Fold recognition and threading can be used when there are no homologs, to identify proteins with the same overall fold but different sequences. Ab initio modeling uses physics-based modeling and protein fragments to predict structure from sequence alone, and has challenges due to the vast number of possible conformations.
This document discusses systems biology and provides examples of regulatory networks and dynamics modeling in systems biology. It summarizes that systems biology aims to understand biological processes using a systems-level approach by integrating 'omics data, quantitative analysis, and computational modeling to study biological systems at various scales, from pathways to whole organisms. It also notes the rapid expansion of the field since 2000 and discusses current and future directions, including data integration, modeling dynamics, placing networks in spatial and temporal contexts, and applications to medicine.
Sequence alignment involves arranging DNA, RNA, or protein sequences to identify similar regions and discover functional, structural, and evolutionary relationships. It compares a reference sequence to a query sequence. Alignments reveal regions of similarity that are unlikely to have occurred by chance and may indicate common ancestry. Global alignment looks for conserved regions across full sequences while local alignment finds local matches between subsequences. Pairwise alignment involves two sequences while multiple sequence alignment handles three or more. Dynamic programming and word methods are common algorithmic approaches to sequence alignment.
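A minimal sketch of the dynamic-programming idea, assuming a toy scoring scheme (+1 match, -1 mismatch, -2 gap): this fills the Needleman-Wunsch table for global alignment. Local (Smith-Waterman) alignment differs mainly by clamping cell scores at zero and taking the best-scoring cell anywhere in the table rather than the bottom-right corner.

```python
# Sketch: global alignment score via Needleman-Wunsch dynamic programming.
def nw_score(a: str, b: str, match=1, mismatch=-1, gap=-2) -> int:
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning the prefixes a[:i] and b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap                 # a[:i] aligned entirely against gaps
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            dp[i][j] = max(diag,           # align a[i-1] with b[j-1]
                           dp[i-1][j] + gap,   # gap in b
                           dp[i][j-1] + gap)   # gap in a
    return dp[n][m]

print(nw_score("GATTACA", "GCATGCU"))  # score of the best global alignment
```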
Systems biology & approaches of genomics and proteomics (sonam786)
This presentation provides a basic understanding of various genomics and proteomics techniques. Systems biology studies life as a system; it includes the study of living systems using various omics technologies.
Predicting drug-target interactions (DTI) is an essential part of the drug discovery process, which is expensive in terms of both time and money. Therefore, reducing DTI cost could lead to reduced healthcare costs for a patient. In addition, a precisely learned molecule representation in a DTI model could contribute to developing personalized medicine, which will help many patient cohorts. In this paper, we propose a new molecule representation based on the self-attention mechanism, and a new DTI model using our molecule representation. The experiments show that our DTI model outperforms the state of the art by up to 4.9 percentage points in terms of area under the precision-recall curve. Moreover, a study using the DrugBank database proves that our model effectively lists all known drugs targeting a specific cancer biomarker in the top-30 candidate list.
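The paper's exact architecture is not reproduced here, but the following toy PyTorch sketch shows the general idea of a self-attention-based molecule representation: tokenized molecules are embedded, attended over, and pooled into fixed-size vectors. The vocabulary size, dimensions, and mean-pooling readout are all illustrative assumptions.

```python
# Sketch: pooling self-attention outputs into a fixed-size molecule vector.
import torch
import torch.nn as nn

class MoleculeEncoder(nn.Module):
    def __init__(self, vocab_size=64, dim=128, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, token_ids):              # (batch, seq_len) integer tensor
        x = self.embed(token_ids)              # (batch, seq_len, dim)
        attended, _ = self.attn(x, x, x)       # self-attention: Q = K = V = x
        return attended.mean(dim=1)            # (batch, dim) molecule vector

enc = MoleculeEncoder()
fake_smiles_tokens = torch.randint(0, 64, (2, 30))  # two dummy tokenized molecules
print(enc(fake_smiles_tokens).shape)                # torch.Size([2, 128])
```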
Molecular modelling techniques help scientists visualize molecules and discover new drug compounds. They use computational methods to mimic molecular behavior without physical experiments. Molecular modelling includes molecular mechanics, which calculates molecular energies and motions using parameters like potential energy surfaces and force fields, and quantum mechanics, which provides nuclear positions and distributions based on electron and nuclear interactions using equations like the Schrödinger equation. Key steps in molecular modelling for drug design include generating lead molecules, minimizing molecular energies, analyzing conformations, and developing pharmacophore models of receptor sites.
This document discusses protein microarrays, which allow high-throughput analysis of thousands of protein interactions. It describes the basic principles and experimental process of protein microarrays, including sample preparation, printing, incubation, washing, and data analysis. Protein microarrays have applications in detecting protein binding properties, profiling antibody specificity, studying post-translational modifications, and identifying biomarkers for clinical research applications like cancer. While powerful for proteomics research, protein microarrays also have some limitations like high costs.
It encloses a brief description of flux balance analysis tools and flux-measuring software, their methods, advantages, and applications compared with other software and analysis techniques, along with discussion of steady-state constraint-based modelling, reconstruction of metabolic pathways, and the different constraints involved.
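At its core, flux balance analysis is a linear program: maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds. A toy three-reaction example using scipy follows; the network and bounds are made-up assumptions (dedicated packages such as COBRApy handle genome-scale models).

```python
# Sketch: toy flux balance analysis as a linear program.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows = metabolites, columns = reactions):
# R1: -> A (uptake), R2: A -> B, R3: B -> biomass
S = np.array([[1, -1,  0],    # metabolite A: produced by R1, consumed by R2
              [0,  1, -1]])   # metabolite B: produced by R2, consumed by R3
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake reaction R1 capped at 10 units
c = np.array([0, 0, -1])      # linprog minimizes, so negate to maximize v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux distribution, here [10. 10. 10.]
```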
I spoke on "Big Data in Biology". The talk basically concentrates on how biology has affected big data and how big data has become a key player in biology. I have also covered how DNA storage can address long term archival storage.
The document discusses various computational methods for predicting the three-dimensional structure of proteins from their amino acid sequences. It describes homology modeling, which predicts structures based on known protein structural templates that share sequence homology. It also covers threading/fold recognition and ab initio modeling, which predict structures without templates by using physicochemical principles or energy minimization approaches. Key steps and programs used in each method are outlined.
This presentation explains homology modeling, with examples of protein primary and secondary structure and supporting images that make it easy to understand.
Introduction to sequence alignment, part II (SumatiHajela)
This document provides an introduction to sequence alignment and discusses gaps and gap penalties. It defines a match and gap in sequence alignment and how substitutions, deletions and insertions are represented. It describes different types of gaps including constant, linear, affine, convex and profile-based variable penalties. Highlights include that gaps allow alignment extension and introduce uncertainty, so penalties are used. Examples demonstrate assigning regular and affine gap penalties.
Protein threading is a protein structure prediction method that involves "threading" or placing an amino acid sequence into known protein structure templates to find the best matching fold (a toy scoring sketch follows the steps below). The key steps are:
1) A query sequence is threaded into structural positions of templates from a structure library to find sequence-structure alignments
2) Alignments are scored and optimized using an objective function accounting for residue interactions and preferences
3) The highest scoring template is selected as the predicted structure, though loop regions are often not accurately predicted
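The toy sketch referenced above illustrates only the scoring idea in step 2: real threading objective functions use statistical contact and environment potentials derived from known structures, but the mechanics of scoring a sequence against template environments and selecting the best template look like this. The preference table and template environment strings are fabricated purely for illustration.

```python
# Sketch: score a query against template environment strings (B=buried, E=exposed).
PREF = {("F", "B"): 2, ("F", "E"): -1,   # hydrophobic Phe prefers buried sites
        ("K", "B"): -2, ("K", "E"): 2,   # charged Lys prefers exposed sites
        ("A", "B"): 1, ("A", "E"): 0}    # Ala is mildly buried-preferring

def thread_score(query: str, environments: str) -> int:
    return sum(PREF.get((aa, env), 0) for aa, env in zip(query, environments))

templates = {"fold1": "BBEEB", "fold2": "EEBBE"}   # made-up template library
query = "FAKAF"
best = max(templates, key=lambda t: thread_score(query, templates[t]))
print(best, {t: thread_score(query, templates[t]) for t in templates})
```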
The Protein Data Bank (PDB) is a database for the three-dimensional structural data of large biological molecules, such as proteins and nucleic acids. This presentation deals with what, why, how, where and who of PDB. In this presentation we have also included briefing about various file formats available in PDB with emphasis on PDB file format
Protein microarray: preparation and methods of arraying proteins (naveed ul mushtaq)
Protein microarray
Preparation of protein microarray
Different methods of arraying the proteins
Types of protein microarrays:
1. Functional protein microarrays
2. Analytical microarrays
3. Reverse-phase protein microarrays
Applications
1. Bioinformatics uses computer science and information technology to analyze biological data and assist with drug discovery. It helps identify drug targets and design drug candidates.
2. The drug design process involves identifying a disease target, studying compounds of interest, detecting the molecular bases of disease, rational drug design, refinement, and testing. Bioinformatics tools assist with each step.
3. Computer-aided drug design (CADD) uses computational methods to simulate drug-receptor interactions and is heavily dependent on bioinformatics tools and databases. It supports techniques like virtual screening, sequence analysis, homology modeling, and physicochemical modeling to aid drug development.
Data mining involves using machine learning and statistical methods to discover patterns in large datasets and is useful in bioinformatics for analyzing biological data. Bioinformatics analyzes data from sequences, molecules, gene expressions, and pathways. Data mining can help understand these rapidly growing biological datasets. Common data mining tools in bioinformatics include BLAST for sequence comparisons, Entrez for integrated database searching, and ORF Finder for identifying open reading frames. Data mining approaches are well-suited to the enormous volumes of data in bioinformatics databases.
Automated sequencing of genomes requires automated gene assignment
This includes detection of open reading frames (ORFs); a minimal ORF-scanning sketch follows this list
Identification of introns and exons
Gene prediction is a very difficult problem in pattern recognition
Coding regions generally do not have conserved sequences
Much progress has been made with prokaryotic gene prediction
Eukaryotic genes are more difficult to predict correctly
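The ORF-scanning sketch referenced above assumes the standard start codon ATG and stop codons TAA/TAG/TGA, and scans the forward strand only. Real gene finders also scan the reverse strand, consider alternative starts, and, for eukaryotes, must handle intron/exon structure.

```python
# Sketch: naive forward-strand ORF scan in all three reading frames.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(dna: str, min_codons: int = 2):
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(dna) - 2, 3):
            codon = dna[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                         # open a candidate ORF
            elif codon in STOPS and start is not None:
                if (i - start) // 3 >= min_codons:
                    orfs.append((start, i + 3, dna[start:i + 3]))
                start = None                      # close the ORF at the stop
    return orfs

print(find_orfs("CCATGAAATGGTTTTAACC"))  # [(2, 17, 'ATGAAATGGTTTTAA')]
```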
This document summarizes different levels of computer simulation used in pharmacokinetics and pharmacodynamics:
1. Level 1 involves simulating the whole organism using systems of differential equations to model pharmacokinetic-pharmacodynamic relationships; these models can generate synthetic clinical trial data (a minimal sketch follows this list).
2. Level 2 simulates isolated tissues and organs using more detailed distributed-parameter models that represent physiological processes better than lumped-parameter whole-body models.
3. Level 3 simulates cells using complex models of intracellular processes, signaling networks, and membrane transport, though cellular mechanisms are still not fully known.
4. Level 4 involves computational design of proteins and genes, with the challenge of integrating information across multiple structural levels.
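The Level-1 sketch referenced in item 1: a one-compartment pharmacokinetic model with first-order absorption and elimination, solved as a small ODE system. All parameter values (ka, ke, V, dose) are illustrative assumptions, not fitted to any drug.

```python
# Sketch: one-compartment PK model with first-order absorption and elimination.
import numpy as np
from scipy.integrate import solve_ivp

ka, ke, V = 1.0, 0.2, 40.0   # absorption rate (1/h), elimination rate (1/h), volume (L)
dose = 500.0                 # oral dose (mg) placed in the gut compartment

def pk(t, y):
    gut, conc = y
    return [-ka * gut,                    # drug amount leaving the gut
            ka * gut / V - ke * conc]     # plasma concentration dynamics (mg/L)

sol = solve_ivp(pk, (0, 24), [dose, 0.0], t_eval=np.linspace(0, 24, 9))
print(np.round(sol.y[1], 2))  # plasma concentration sampled over 24 h
```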
Computer simulation in pharmacokinetics and pharmacodynamics (SUJITHA MARY)
This document discusses the use of computer simulation in pharmacokinetics and pharmacodynamics at four different levels: whole organism, isolated tissues/organs, cellular, and protein/gene levels. At each level, mathematical models are used to represent biological processes and predict behavior over time. The goal is to better understand drug behavior and improve drug development by replacing animal and human trials with computer simulations. Challenges include integrating data from different structural levels and ensuring high quality input data.
1) The document discusses the basics of drug design including defining the disease process, identifying targets for drug design like enzymes, receptors and nucleic acids, and the different approaches of ligand-based drug design and structure-based drug design.
2. It also covers important techniques in drug design like computer-aided drug design using computational methods, quantitative structure-activity relationships (QSAR; a toy QSAR fit is sketched after this list), and the uses of computer graphics in molecular modeling and dynamics simulations.
3) Important experimental techniques discussed are x-ray crystallography and NMR spectroscopy that provide structural information for target biomolecules essential for structure-based drug design.
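The toy QSAR fit referenced in item 2: ordinary least squares relating a few simple molecular descriptors to activity. The descriptor values and activities below are fabricated purely to show the mechanics; real QSAR work uses curated datasets, descriptor selection, and careful validation.

```python
# Sketch: toy QSAR model fit by ordinary least squares.
import numpy as np

# Descriptor columns (assumed): logP, molecular weight / 100, H-bond donors
X = np.array([[1.2, 1.8, 2], [2.5, 2.3, 1], [0.8, 1.5, 3],
              [3.1, 3.0, 0], [1.9, 2.1, 2]], dtype=float)
y = np.array([5.1, 6.0, 4.6, 6.8, 5.5])   # made-up activities (e.g. pIC50)

A = np.hstack([X, np.ones((len(X), 1))])  # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("coefficients:", np.round(coef, 3))
print("predicted:", np.round(A @ coef, 2))
```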
Unveiling the role of network and systems biology in drug discovery (chengcheng zhou)
This document reviews recent advances in network and systems biology applied to human health and drug discovery. It discusses how these approaches consider biological targets in their physiological context without losing molecular details. Network biology will be central to developing polypharmacology strategies for complex multi-factorial diseases by altering entire pathways rather than single proteins. Predictive toxicology and drug repurposing are areas where network and systems biology strategies could have an immediate impact on drug discovery.
This document discusses computer simulation in pharmacokinetics and pharmacodynamics. It begins by defining computer simulation and explaining its importance in the biomedical field. It then describes the four levels of simulation: [1] simulation of the whole organism, [2] isolated tissues and organs, [3] the cell, and [4] proteins and genes. For whole organism simulation, it discusses physiologically-based pharmacokinetic (PBPK) models and pharmacokinetic-pharmacodynamic (PK/PD) models. The key steps in building PK/PD models are also outlined.
Cheminformatics plays a key role in modern drug discovery by helping chemists organize and analyze the vast amounts of chemical data being produced. It combines fields like chemistry, biology, and informatics to transform data into knowledge. Specifically, cheminformatics aids in tasks like identifying drug targets, finding lead compounds, optimizing leads, and conducting pre-clinical trials through methods such as high-throughput screening, structure-activity modeling, and predictive toxicity analysis. It also provides tools for tasks like drawing and searching chemical structures in databases.
Computer simulation involves creating computer models to simulate real-world systems. There are four levels of simulation in pharmacokinetics and pharmacodynamics: 1) Whole organism simulation using PK/PD or PBPK models, 2) Isolated tissue and organ simulation, 3) Cellular simulation, and 4) Protein and gene simulation. PBPK models in particular are used to predict absorption, distribution, metabolism, and excretion of drugs in the human body based on physiological and drug properties.
Bioinformatics' role in pharmaceutical industries (Muzna Kashaf)
Bioinformatics plays a key role in the pharmaceutical industry by enabling target identification of diseases, rational drug design, compound refinement, and other processes. It facilitates identifying target diseases and compounds, detecting molecular bases of diseases, designing drugs, refining compounds, and testing drug solubility and effects. Bioinformatics supports various stages of drug development including formulation, crystallization determination, polymer modeling, and testing before human use. Its integration into the pharmaceutical industry supports drug discovery, healthcare advances, and realizing the promises of projects like the Human Genome Project.
National Resource for Network Biology's TR&D Theme 1: In this theme, we will develop a series of tools and methodologies for conducting differential analyses of biological networks perturbed under multiple conditions. The novel algorithmic methodologies enable us to make use of high-throughput proteomic level data to recover biological networks under specific biological perturbations. The software tools developed in this project enable researchers to further predict, analyze, and visualize the effects of these perturbations and alterations, while enabling researchers to aggregate additional information regarding the known roles of the involved interactions and their participants.
The document discusses the applications of bioinformatics in drug discovery. It describes how bioinformatics supports computer-aided drug design through computational methods to simulate drug-receptor interactions. It also discusses how virtual high-throughput screening can identify compounds that strongly bind to protein targets. The document outlines the key steps in drug design, including identifying the disease target, studying lead compounds, rational drug design techniques, and testing drugs. It emphasizes that bioinformatics can predict important drug characteristics like absorption and toxicity to save costs during development.
This document discusses the use of artificial intelligence in drug discovery and development. It begins by defining artificial intelligence, machine learning, and deep learning. It then provides examples of how AI is currently used in various stages of the drug development process, including identifying molecular targets, finding hit compounds, optimizing lead compounds, predicting toxicity, and drug repurposing. It also discusses startups applying AI to drug discovery. Finally, it notes some limitations and drawbacks of using AI, such as potential bias in algorithms.
This document describes two challenges presented as part of the DREAM initiative to evaluate methods for parameter estimation and network topology inference from experimental data. In the first challenge, participants were given the topology of a 9-gene network and asked to estimate 45 kinetic parameters. In the second challenge, participants were given an incomplete 11-gene network and asked to identify 3 missing links and associated parameters. Participants could purchase simulated experimental data using a credit system, allowing iterative experimental design. While parameter estimation was accomplished well using fluorescence data, topology inference was more difficult. Aggregating submissions produced better solutions than individual methods.
Computational (In Silico) Pharmacology.pdf (ssuser515ca21)
This document provides an overview of computational pharmacology and its applications. It discusses molecular modeling and simulation techniques like molecular docking, dynamics simulations, and QSAR modeling. It also covers pharmacokinetic and pharmacodynamic modeling to predict how drugs move through and act on the body. Computational pharmacology uses these in silico methods to better understand drug effects at a cellular level without extensive experimentation.
Statistical modeling in pharmaceutical research and development (ANJALI)
Statistical modeling is used in the pharmaceutical industry to overcome challenges in pharmaceutical formulation and to reduce cost while increasing the quality and speed of developing pharmaceutical products.
Bioinformatics plays an important role in drug discovery and development by enabling target identification, rational drug design, compound refinement, and other processes. Key applications of bioinformatics include virtual screening of large compound libraries to identify potential drug leads, homology modeling of protein structures to inform drug design, and similarity searches to find analogs of existing drug molecules. The overall drug development process involves studying the disease, identifying drug targets, designing compounds, testing and refining candidates, and conducting clinical trials. Computational techniques expedite many steps but experimental validation is still needed.
Computational modeling of drug distribution (jaatinpubg)
This document discusses computational modeling techniques for predicting drug distribution properties. It covers two main modeling approaches: structure-based approaches like pharmacophore modeling and docking to study drug-target interactions, and quantitative approaches like QSAR and QSPR studies that use multivariate analysis to correlate molecular descriptors with properties. Key aspects of drug distribution addressed include volume of distribution, plasma protein binding, and blood-brain barrier permeability. The challenges of developing accurate predictive models for these properties are also noted.
Bioinformatic tools can be applied throughout the drug design process to reduce costs and time. High-throughput screening allows testing of millions of compounds against protein targets. Computer modeling predicts compound activity and allows virtual screening. Molecular modeling visualizes compound-protein interactions to understand mechanisms of action. In silico models predict absorption, distribution, metabolism, and excretion to evaluate drug properties without animal testing. Bioinformatics databases provide protein and compound structure information to inform drug target and lead identification. Together these tools automate and accelerate key steps in drug design and development.
This document discusses how bioinformatics tools can be used in drug design. It describes several approaches: chemical modification of existing drugs, receptor-based design by determining receptor structures, and ligand-based design using known active ligands. It also discusses identifying disease targets, refining drug structures, detecting drug binding sites using protein modeling, and rational drug design techniques like virtual screening. QSAR methods relate compound structures to activities, while molecular modeling and docking simulate drug-receptor interactions to aid design. Informatics plays a key role in storing and analyzing the large amounts of data generated.
Computer simulation in pharmacokinetics and pharmacodynamics (MOHAMMAD ASIM)
Computer simulation can be used at four levels in pharmacokinetics and pharmacodynamics: (1) whole organism level using lumped-parameter or physiologically-based pharmacokinetic models, (2) isolated tissue and organ level using distributed blood tissue exchange models, (3) cellular level modeling intracellular and membrane processes, and (4) protein and gene level including computational protein design and models of conditions like HIV viral load.
2. INDEX
• 1.-Introduction: What is systems biology
• The omics revolution
• From omics to systems biology
• 2.-Systems biology methods
• 2.1-Challenges and perspectives
• 2.2-Systems biology and drug discovery
• 3.-Systems biology in the pharmaceutical industry
• 3.1-The role of systems biology in target discovery
• 3.2-Drug Mechanism of Action
• 3.3-Drug combinations, computational modeling
• 3.4-Preclinical research
• 3.5-Examples of computational models relevant to human disease biology
• 3.6-Uses for, and challenges of, each systems biology approach: omics, complex cell systems and modeling
• 3.7-Development cycle of integrated in silico models using component-level and system-response data
• 4.-Systems Biology Methods in Drug Discovery
• 4.1-New directions: systems biology and systems chemistry
• 4.2-Structural Systems Pharmacology
• 4.3-Drug Efficacy through Systems Biology
• 4.4-Drug-to-Target and Target Profiling Technologies
• 4.5-Therapeutic Performance Mapping System (TPMS)
• 5.-Drug development
• 5.1-Drug repositioning
• 5.2-Determining side-effects
• 5.3-Conclusion
3. 1.-INTRODUCTION
Nowadays we have large amounts of omics molecular data at the level of the genome, transcriptome, proteome, and metabolome. The field of 'omics' currently polarizes the community of biologists. The omics driver: DNA sequence data are doubling every 5 months.
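To put that doubling claim in perspective, a doubling time of T months implies a growth factor of $2^{t/T}$ after t months, so with T = 5:

```latex
2^{12/5} \approx 5.3 \text{ per year}, \qquad 2^{36/5} \approx 147 \text{ over three years}
```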
4. It's not about the numbers: the complexity is increasing.
1.1-The omics revolution
Metagenomics: new discoveries
5. 1.2-From omics to systems biology
Systems biology has emerged because we need to know the structure of biological systems and how their interacting components can produce complex system behaviors. The current challenge is interpreting the overwhelming amount of genome-scale data on a systems level. We use visualization of omics data for systems biology.
6. Systems biology approaches look to integrate:
1.-Across levels of structure and scale, from building blocks and functional modules to the large-scale organization of the system
2.-Across phases of processes, linking the insights from the many "omics" that have emerged from technological advances
3.-A tight linking of experimental and modeling processes, involving both data-driven and hypothesis-driven phases
4.-A multi-disciplinary collaboration to provide insights from the natural sciences, mathematics, computer science, engineering and medicine
2.-SYSTEMS BIOLOGY METHODS
7. Some key challenges facing systems biology today are:
1.-Invention of new experimental methods (based on, e.g., microfluidics, nanotechnology, femtochemistry) to provide the high-quality and comprehensive data needed for modeling and simulation
2.-Development of modeling and computational approaches to handle large complex systems, which have highly diversified components and encompass multiple scales (in space and time)
3.-Establishment of "systems engineering-oriented" ways of collaboration among experimentalists, among modelers and between both groups, an important aspect of which would be to establish standards and platforms for data schemes and software tools
4.-Education of scientists, across all disciplines, with the right balance of experimental and modeling understanding and skills
2.1-Challenges and perspectives
8. Systems biology has the potential to impact the entire drug discovery and development process, as it looks to bridge the molecular and the physiological.
Industrial research labs plan to use systems biology, for example, to:
• develop biomarkers for efficacy and toxicology, which could lead to more efficient preclinical development
• screen for combination drug targets
• elucidate drug side effects
• find alternative indications for drugs already on the market
2.2-Systems biology and drug discovery
9. 3.-SYSTEMS BIOLOGY IN THE PHARMACEUTICAL INDUSTRY
A practical approach to systems biology at the cell signaling network and cell-cell interaction scales.
Generation of data that incorporate biological complexity at multiple levels: multiple interacting active pathways, multiple intercommunicating cell types and multiple different environments.
10. 3.1-The role of systems biology in target discovery
Identify genes or proteins whose modulation will halt or significantly alter disease progression.
11. Network and systems biology strategies are more likely to make an immediate contribution in predictive toxicology and drug repurposing.
If successful, these models could help to identify potential drug targets that are likely to trigger severe adverse reactions at early stages of the discovery process, and to rationally design the toxicity tests needed to check the safety of other drug targets under the area of influence of a certain red node.
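As a toy illustration of this idea (not the authors' actual model), the sketch below flags candidate targets that fall inside a hypothetical "red node's" area of influence in a protein-protein interaction network; the network, node names and radius cutoff are all made up:

```python
# Illustrative sketch: flagging candidate targets that lie within the
# area of influence of a toxicity-associated ("red") node in a PPI graph.
# All nodes, edges and the RADIUS cutoff below are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("TARGET_A", "P1"), ("P1", "RED_NODE"),
    ("TARGET_B", "P2"), ("P2", "P3"), ("P3", "RED_NODE"),
])

RADIUS = 2  # hypothetical cutoff defining the red node's area of influence
influence = nx.ego_graph(G, "RED_NODE", radius=RADIUS).nodes()

for target in ("TARGET_A", "TARGET_B"):
    risk = "flag for toxicity testing" if target in influence else "low concern"
    print(target, "->", risk)
```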
12. 3.2-Drug Mechanism of Action
Drugs act by affecting the function of different molecules of the human body, or of its invading pathogens in the case of infectious diseases. The pathway leading from the molecules with which a drug directly interacts (its 'targets') to the ones that cause its effects is named its 'mechanism of action'.
Gaining this knowledge is one of the leading topics of the pharmaceutical industry, because the mechanism of action is fundamental to understanding the precise actions of a drug.
14. 3.3-Drug combinations, computational modeling
Integration of these systems approaches will enable faster discovery and translation of clinically relevant drug combinations.
Mechanism-of-action discovery traditionally required serendipitous or resource-intensive approaches, but systems biology allows us to compile all the scientific knowledge generated over the years and draw conclusions from all of it at once, thanks to the use of powerful algorithms and computers.
In this way, we find the most probable mechanism of action from any suggested set of 'stimuli' and 'responses'.
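A minimal sketch of this stimulus-to-response reasoning, under assumed data: candidate mechanism-of-action paths from a drug's direct targets to its observed effectors are ranked over a toy interaction network (all node names are hypothetical):

```python
# Minimal sketch: inferring a plausible mechanism-of-action path from a
# drug's direct target ("stimulus") to an observed effector ("response")
# over a curated interaction network. All names here are hypothetical.
import networkx as nx

net = nx.DiGraph()
net.add_edges_from([
    ("DRUG_TARGET", "KINASE1"), ("KINASE1", "TF1"),
    ("TF1", "EFFECT_GENE"), ("DRUG_TARGET", "PHOSPHATASE1"),
])

# Rank candidate mechanisms simply: shorter signalling chains first.
for path in nx.all_shortest_paths(net, "DRUG_TARGET", "EFFECT_GENE"):
    print(" -> ".join(path))
```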
15. Future strategy for drug combination predictions with parallel integration of computational modeling, preclinical testing, and clinical trials.
Future combinatorial drug discovery approaches will benefit from tighter integration of gene signatures and phenotypic screen data with computational models.
For clinical application, patient gene signatures can be clustered with gene expression signatures from previously modeled cell lines.
All of this is computed algorithmically, which allows us to provide several nice features.
16. 3.4-Preclinical Research
The systems biology group's role is to develop a detailed understanding. This includes research toward the identification of molecular targets or other intervention points, model development, and discovery of predictive biomarkers and outcome measures.
17. 3.5-Examples of computational models relevant to human disease biology
See http://www.cellml.org/examples/repository/ for more examples. Computer models: from pathways to disease physiology.
18. 3.6-Uses for, and challenges of, each systems biology approach: omics, complex cell systems and modeling
(a) Approach cannot address issue, −; approach can address issue, +; approach can address issue under certain conditions, +/−.
Omics: large-scale data generation and mining
19. 3.7-Development cycle of integrated in silico models using component-level and system-response data
Models are iteratively tested and improved by comparison of predictions with system-level responses measured experimentally through traditional assays or from profiles generated from complex, activated human cell mixtures under a set of different environmental conditions. Component-level 'omics' data can provide a scaffold, limiting the range of possible models at the molecular level.
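Schematically, this develop-test-refine cycle might look like the loop below; a minimal sketch in which `refine` and `prediction_error` are placeholders for a real modeling pipeline, and the tolerance and refinement rule are hypothetical:

```python
# Schematic sketch of the iterative develop-test-refine cycle described
# above. Everything here is a stand-in: a real pipeline would simulate a
# pathway model and compare it against measured system-level profiles.

def refine(params, omics_scaffold):
    """Stand-in for re-estimating parameters within omics-derived constraints."""
    return {k: v * 0.9 for k, v in params.items() if k in omics_scaffold}

def prediction_error(params, measured_response):
    """Stand-in for comparing simulated vs. measured system responses."""
    predicted = sum(params.values())  # trivial surrogate "simulation"
    return abs(predicted - measured_response)

params = {"k_act": 1.0, "k_inh": 0.5}   # hypothetical rate constants
scaffold = {"k_act", "k_inh"}           # components supported by omics data
measured = 1.2                          # hypothetical system-level readout

# Iterate: test the model against data, refine while it disagrees.
while prediction_error(params, measured) > 0.05:
    params = refine(params, scaffold)

print("accepted model parameters:", params)
```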
20. 4.-Systems Biology Methods in Drug Discovery
Leveraging complexity in cell systems biology for drug discovery.
The combination of multiple cell types and multiple activated pathways elicits complex network regulation and emergent properties that enhance the sensitivity and ability of the systems to discriminate unique drug and gene effects.
21. Several complex human cell 'systems' are interrogated with genes or drugs of interest, and the effects on the levels of selected protein readouts are determined, generating a profile that serves as a multisystem signature of the function of the test agent.
Statistical measures of profile similarity can be used to cluster genes or drugs by function, and to generate graphical representations of their functional relationships with each other.
In the example shown, BioMAP clustering defines two functional activity classes among structurally related p38 MAPK inhibitors.
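A minimal sketch of this kind of profile-similarity clustering, with made-up compound names and readout values; correlation distance plus average-linkage clustering stands in here for whatever statistics the BioMAP analysis actually uses:

```python
# Minimal sketch (hypothetical data): clustering compounds by similarity
# of their multi-system activity profiles, in the spirit of the BioMAP
# example above. Profile values and compound names are made up.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

compounds = ["p38i_A", "p38i_B", "p38i_C", "p38i_D"]
# Rows: compounds; columns: protein readouts across cell systems.
profiles = np.array([
    [1.2, -0.8, 0.3, 0.9],
    [1.1, -0.9, 0.4, 1.0],
    [-0.5, 0.7, -1.1, 0.2],
    [-0.6, 0.8, -1.0, 0.1],
])

# Correlation distance groups compounds with similar response patterns.
dist = pdist(profiles, metric="correlation")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
for name, lab in zip(compounds, labels):
    print(name, "-> functional class", lab)
```

Compounds with correlated profiles land in the same functional class, mirroring the two activity classes among p38 inhibitors described above.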
22. 4.1-New directions for drug discovery
The CGBVS ligand-protein interaction discovery method (center) is a shift in exploring the interface between chemistry and biology: it requires neither the protein three-dimensional structure needed in traditional SBVS (right), nor is it limited in scope to a single protein as is the case in LBVS (left). In CGBVS, protein subsequences and chemical descriptions of topology and other physicochemical properties are combined for each known interaction and non-interaction. The set of (non-)interactions is used to build a predictive model that can rank novel ligand-protein interactions for prioritization in bioassay experiments.
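A hedged sketch of the CGBVS idea, with toy stand-ins for the published descriptors: concatenate a protein-sequence feature vector with a ligand fingerprint, train a classifier on known interactions and non-interactions, then use it to rank a novel pair:

```python
# Toy sketch of the CGBVS scheme: (protein features + ligand features)
# -> interaction classifier. The features and data below are random
# stand-ins, not the published CGBVS descriptors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
AA = "ACDEFGHIKLMNPQRSTVWY"

def protein_features(seq):
    """Toy stand-in: amino-acid composition of the sequence."""
    return np.array([seq.count(a) / len(seq) for a in AA])

def ligand_features(n_bits=16):
    """Toy stand-in for a chemical fingerprint (random bits here)."""
    return rng.integers(0, 2, n_bits)

# Assemble (protein, ligand) pairs with interaction labels (synthetic).
X, y = [], []
for label in (1, 0) * 20:
    seq = "".join(rng.choice(list(AA), size=50))
    X.append(np.concatenate([protein_features(seq), ligand_features()]))
    y.append(label)

model = SVC(probability=True).fit(np.array(X), y)

# Rank a novel ligand-protein pair for bioassay prioritization.
pair = np.concatenate([protein_features("MKTAYIAKQR" * 5), ligand_features()])
print("interaction probability:", model.predict_proba([pair])[0, 1])
```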
23. Complexity of interaction networks between chemical and biological spaces
The need to shift the drug design strategy from "one-ligand-one-protein" to "many-to-many." The node color indicates the classes that compounds and GPCRs belong to (blue, amines; red, peptides; yellow, prostanoids; green, nucleotides). The links colored from green to yellow to red indicate increasing confidence in the GPCR-ligand interaction, with a number of interclass GPCR-ligand interactions exhibiting high predictive confidence.
Similar to the CGBVS method, which can exploit protein and ligand promiscuity, the incorporation of multiple interactions will provide constraints to guide future generations of molecule design, producing medicines that are more personalized and have fewer side effects.
24. The structural characterization of cellular networks helps explain drug mechanisms of action and has expanded the universe of therapeutic strategies, revealing novel classes of targetable entities better suited to fight complex diseases.
Polypharmacology involves the modulation of various proteins to target the network more efficiently.
Targeting PPI interfaces: a drug can alter PPIs by inhibiting them through competitive binding at the interface or by stabilizing them.
Allostery: PPIs can also be modulated through conformational changes induced at distal sites of the protein.
4.2-Structural Systems Pharmacology
25. Structural Systems Pharmacology: the role of 3D structures in next-generation drug development
Mapping genetic variations onto pharmacological targets can rationalize interindividual variability in drug response, giving valuable hints to advance towards personalized medicine.
Genetic variations in pharmacological targets can have an important effect through the direct or indirect perturbation of the drug-binding cavity.
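As a toy first pass at this mapping (the residue numbers, variants, and distance rule below are all hypothetical), one can flag whether known variants fall in, near, or far from a target's drug-binding cavity:

```python
# Toy sketch: flagging whether variants of a target fall in (or near) its
# drug-binding cavity, as a first pass at rationalizing response
# variability. Residue numbers and variant names are hypothetical.
binding_site = {45, 46, 49, 112, 115}            # pocket residues
variants = {"V45M": 45, "A200T": 200, "G113S": 113}

for name, pos in variants.items():
    if pos in binding_site:
        verdict = "direct perturbation of the binding cavity"
    elif min(abs(pos - r) for r in binding_site) <= 2:
        verdict = "possible indirect perturbation (near the cavity)"
    else:
        verdict = "distal; effect unclear"
    print(name, "->", verdict)
```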
26. 4.3-Drug Efficacy through Systems Biology
In vitro and in vivo assays are time-consuming and expensive.
With TPMS technology it is possible to assess the efficacy of either an experimental or an already marketed compound just by knowing its targets. Through artificial neural networks (ANNs), a type of artificial intelligence method, we are able to predict the efficacy of a drug in treating a disease by analysing the effect of the modulation of its molecular targets over a molecularly defined model of the disease.
We can predict the target profile of a drug based on its structure and its biological and clinical features.
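TPMS itself is proprietary, so the following is only a generic illustration of the stated idea, on synthetic data: a small neural network maps a drug's target-modulation vector to an efficacy label for one disease model:

```python
# Generic illustration (not TPMS): train a small ANN to map a drug's
# target-modulation vector to an efficacy label. All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_targets = 30

# Each row encodes which targets a drug modulates
# (+1 activation, -1 inhibition, 0 no effect).
X = rng.choice([-1, 0, 1], size=(200, n_targets))
disease_signature = rng.choice([-1, 0, 1], size=n_targets)
# Hypothetical rule: drugs counteracting the disease signature work.
y = (X @ -disease_signature > 2).astype(int)

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X, y)

candidate = rng.choice([-1, 0, 1], size=(1, n_targets))
print("predicted efficacy probability:", ann.predict_proba(candidate)[0, 1])
```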
27. 4.4-Drug-to-Target and Target Profiling Technologies
State-of-the-art scientific knowledge is combined with various chemical modelling approaches to identify potential targets and off-targets of a compound.
28. 4.5-Therapeutic Performance Mapping System (TPMS)
TPMS integrates data from clinical and preclinical studies about the drug into mathematical models that have been designed in agreement with the currently available biological knowledge. Then, the system calculates the probability of each of the predicted targets and off-targets causing the clinical and physiological manifestations observed in studies.
29. On the one hand, it can simultaneously analyse thousands of compounds at a very low cost compared to in vitro or in vivo experiments.
On the other, it considers all the available information about diseases and drugs and puts it in the context of human physiology, achieving a holistic understanding of the problem under study.
30. 5-Drug development
Drug discovery and development is a long road with many checkpoints.
Drug development costs broken down by stage. Source: Paul S. (2010).
31. 5.1-Drug Repositioning
The first approach is too time- and cost-consuming, whereas the second is inherently slow and can easily overlook not-too-evident properties of the compounds under study.
32. Drug candidates have frequently undergone exhaustive testing before being approved for their original indication.
34. 5.3-Conclusion
During drug development, million-dollar decisions are (and must be) routinely made using flawed criteria based on incomplete biological knowledge: for example, targets are prioritized because they are upregulated at the gene level in disease; compounds are selected to be biochemically specific; animal models are considered essential.
Better biology, preferably more relevant to human disease and capable of being integrated into the drug discovery process, is sorely needed to inform decision-making. Although the systems biology approaches outlined here are in their infancy, they are already contributing to meaningful drug development decisions by accelerating hypothesis-driven biology, by modeling specific physiologic problems in target validation or clinical physiology, and by providing rapid characterization and interpretation of disease-relevant cell and cell-system-level responses.
Markup languages for gene expression data, emerging ontologies for sharing and integrating different kinds of omic and conventional biological data [4], and the introduction of standardized high-throughput systems biology and associated informatics approaches represent important first steps on this path.