Toxicogenomics uses gene expression profiling to study how organisms respond to toxic compounds on a global scale. This new approach promises to greatly advance toxicology research. It may help identify toxic mechanisms earlier and assist in predicting compound toxicity. Challenges include interpreting large gene expression datasets and linking changes to specific toxic effects. Progress has been made using toxicogenomics to predict compound mode of action and toxicity pathways. Integrating gene expression data with traditional toxicology can help realize the full potential of this new approach.
ESTs are short sequences of DNA that represent genes expressed in certain tissues or organisms. They provide a quick and inexpensive way for scientists to discover new genes and map their positions in genomes. ESTs represent a snapshot of genes expressed in a tissue at a given time. Sequencing the beginning or end of cDNA clones produces 5' and 3' ESTs, which can help identify genes and study gene expression and regulation.
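The idea of matching an EST back to a genome can be sketched in a few lines. This is a toy illustration only: real EST mapping uses alignment tools such as BLAST that tolerate sequencing errors and introns, whereas this sketch assumes an exact match, and both sequences are invented.

```python
# Toy illustration of EST-to-genome mapping by exact substring search.
# Real pipelines use alignment tools (e.g. BLAST); sequences are invented.

def map_est(genome: str, est: str) -> int:
    """Return the 0-based position of the EST in the genome, or -1 if absent."""
    return genome.find(est)

genome = "ATGGCGTACGTTAGCCGTACGGATCCA"
est = "GTACGGAT"  # hypothetical 3' EST read
print(map_est(genome, est))  # → 16
```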
This document provides an overview of basic cell culture techniques. It discusses the history of cell culture, defining primary and secondary cell cultures. It describes different types of cell lines and how cells grow as monolayers or in suspension. The document outlines the key equipment needed for a cell culture laboratory, including biosafety cabinets, CO2 incubators, centrifuges, microscopes, and supplies. It emphasizes the importance of aseptic technique to prevent microbial contamination when working with cell cultures.
Chloroplasts are double-membrane organelles found in plant cells that contain chlorophyll and are the site of photosynthesis. Chloroplast DNA is circular and ranges in size from 120,000 to 170,000 base pairs. It contains approximately 120 genes, including genes that encode proteins involved in photosynthesis and the transcription and translation machinery. Chloroplast DNA replication is semi-conservative and there are typically multiple copies of the chloroplast genome within each chloroplast.
Cryopreservation involves storing biological material at ultra-low temperatures, usually in liquid nitrogen. This allows long-term preservation by stopping almost all metabolic activity in cells. Materials are frozen using slow freezing, rapid freezing, or stepwise freezing methods. They are then stored long-term at temperatures near -196°C. When needed, samples are thawed quickly in a warm water bath before use or analysis. Cryopreservation has many applications for preserving cells, tissues, blood, embryos and more.
This document describes the MTT assay, a colorimetric assay used to measure cell viability and cytotoxicity. The MTT assay works by using the enzyme mitochondrial dehydrogenase in living cells to reduce the yellow tetrazolium dye MTT to purple insoluble formazan. The amount of formazan produced is directly proportional to the number of viable cells. The document outlines the principle, reagents, procedure, troubleshooting, advantages, and disadvantages of the MTT assay. Commonly available MTT assay kits are also listed.
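Since formazan absorbance is proportional to viable cell number, viability is usually reported relative to untreated controls. A minimal sketch of that standard calculation, assuming background-corrected absorbance readings (the OD values below are invented):

```python
# Standard MTT viability calculation: treated-well absorbance relative to
# untreated controls, after subtracting the cell-free blank. Values invented.

def percent_viability(od_treated: float, od_control: float, od_blank: float) -> float:
    """Viability of treated cells as a percentage of untreated controls."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

print(round(percent_viability(od_treated=0.45, od_control=0.80, od_blank=0.05), 1))  # → 53.3
```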
The document discusses development in Drosophila melanogaster (fruit flies). It describes how maternal molecules establish body axes in the early embryo before cell differentiation. Segmentation genes are then expressed in gradients controlled by these maternal factors and establish the basic body plan. These include gap, pair-rule and segment polarity genes. Finally, homeotic genes specify the identity of each body segment and control structure formation. This cascade of gene regulation results in the distinct segments that make up the adult fly body.
The document discusses the field of proteomics, which involves the systematic analysis of proteins in cells under various conditions. It defines key terms like proteome and describes technologies used in proteomics like mass spectrometry, protein separation techniques, and protein analysis methods. The document also outlines several applications of proteomics in medicine, such as in diagnosing and studying diseases like diabetes and rheumatoid arthritis, as well as its use in analyzing changes that occur during the aging process.
This document discusses functional genomics and its approaches. It defines functional genomics as a genome-wide experimental approach to assessing the function of genes using information from structural genomics. The key functional genomics approaches discussed are transcriptomics, proteomics, metabolomics, interactomics, epigenetics, and nutrigenomics. Modern techniques discussed include expressed sequence tags (ESTs), serial analysis of gene expression (SAGE), and microarray analysis.
The livestock sector is an important part of the Indian economy. Biotechnology plays a key role in boosting the productive performance of the existing livestock population in India.
Protein-protein interactions (PPIs) are important for many cellular functions. There are two main types of PPIs - transient interactions which are brief, and stable interactions which form multiprotein complexes. Crosslinking can capture both transient and stable PPIs by covalently binding interacting proteins. In vivo crosslinking studies PPIs in their native environment while in vitro crosslinking allows better reaction control. Pull-down assays use affinity purification to isolate stable protein complexes and identify binding partners of a bait protein. SDS-PAGE is commonly used to separate and visualize proteins isolated by techniques like pull-down.
This document discusses various methods of transfection, which is the process of introducing nucleic acids into cells. It describes both physical and chemical transfection methods. Physical methods include electroporation, microinjection, and cell squeezing, which introduce DNA directly into cells using physical forces. Chemical methods involve using reagents like cationic lipids, calcium phosphate, and cationic polymers to form complexes with DNA that are then taken up by cells. The document discusses the principles, advantages, and disadvantages of many common transfection methods.
A gene knockout is a genetic technique in which one of an organism's genes is made inoperative ("knocked out" of the organism). However, gene knockout can also refer to the gene that is knocked out or the organism that carries the gene knockout. Knockout organisms or simply knockouts are used to study gene function, usually by investigating the effect of gene loss. Researchers draw inferences from the difference between the knockout organism and normal individuals.
Genomic in situ hybridization (GISH) is a cytogenetic technique used to label parts of a genome or genomes within a cell. It was first used to discriminate between the genomes of an intergeneric hybrid of Hordeum chilense and Secale africanum. GISH involves labeling the whole DNA of an organism and using a probe to target the genome of another organism. Parts of the genome complementary to the probe will hybridize, allowing identification of parental genomes and introgressed regions. GISH provides advantages over fluorescence in situ hybridization (FISH) in investigating evolutionary relationships of crops and identifying inserted regions from alien species in parental genomes.
X-ray Crystallography & Its Applications in Proteomics, by Akash Arora
X-ray crystallography is a technique that uses X-rays to determine the atomic structure of crystals. It involves crystallizing molecules and bombarding them with X-rays, which produce a diffraction pattern. This pattern is used to deduce the molecular structure. X-ray crystallography has many applications in proteomics, including determining protein structures, studying protein interactions, and elucidating enzyme catalysis mechanisms. It provides atomic-level insights that advance understanding of protein function.
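The relationship between the diffraction pattern and atomic spacing is governed by Bragg's law, n·λ = 2d·sin(θ). A small sketch of solving it for the lattice plane spacing d, using the common Cu K-alpha wavelength as an illustrative input:

```python
import math

# Bragg's law, n*lambda = 2*d*sin(theta), relates diffraction angle to the
# spacing of crystal lattice planes. Example values are illustrative only.

def bragg_spacing(wavelength_nm: float, theta_deg: float, n: int = 1) -> float:
    """Lattice plane spacing d (nm) from diffraction order n and angle theta."""
    return n * wavelength_nm / (2 * math.sin(math.radians(theta_deg)))

# Cu K-alpha radiation (~0.154 nm) diffracting at theta = 15 degrees
print(round(bragg_spacing(0.154, 15.0), 4))  # → 0.2975
```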
INTRODUCTION
A PERFECT THERAPEUTIC DRUG
DRUG DISCOVERY- HISTORY
MODERN DRUG DISCOVERY
BIOINFORMATICS IN DRUG DISCOVERY
DRUG DISCOVERY BASED ON BIOINFORMATIC TOOLS
BIOINFORMATICS IN COMPUTER-AIDED DRUG DISCOVERY
ECONOMICS OF DRUG DISCOVERY
CONCLUSION
REFERENCES
pBluescript is an example of a phagemid, a hybrid between a plasmid and a phage.
Phagemids represent a hybrid class of vectors that serve to produce single-stranded DNA.
Comparative genomics involves systematically comparing genome sequences from different organisms. It uses computer programs to identify homologous genomic regions and align sequences at the base-pair level. Comparing genomes at different phylogenetic distances can provide insights into gene structure/function, evolution, and characteristics unique to each organism. Key tools for comparative genomics include genome browsers, aligners, and databases that classify orthologous gene clusters conserved across species.
Cell lines are permanently established cell cultures that can proliferate indefinitely. They are derived from primary cell cultures isolated from animal or plant tissues. Cell lines may be normal or transformed and can have finite or continuous growth. Different cell lines have various applications including screening drugs, studying cell functions, and producing vaccines and therapeutic proteins. Selecting the appropriate cell line depends on factors like species, growth characteristics, and intended experimental purpose.
Introduction
What is cloning?
Why do we want to do cloning?
History
Technique of cell cloning
Dolly – the sheep
Species cloned
Why pursue animal cloning research?
Conclusion
Genetic mapping is based on recombination frequencies between genetic loci during meiosis. Physical mapping determines the actual distances in base pairs between sequences on a chromosome using overlapping DNA fragments. Before whole genome sequencing, physical maps were created using techniques like restriction mapping of large-insert clones, probing genomic libraries with end fragments, and chromosome walking to build contigs of overlapping sequences. This allowed sequencing of individual fragments which could then be assembled into a complete genome sequence.
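For closely linked loci, the recombination-frequency basis of genetic mapping reduces to a simple calculation: 1% recombination corresponds to roughly 1 centiMorgan (cM). A minimal sketch, with invented progeny counts:

```python
# Map distance between two loci estimated from progeny counts; for closely
# linked loci, 1% recombination ~ 1 centiMorgan (cM). Counts are invented.

def map_distance_cm(recombinant: int, total: int) -> float:
    """Map distance in cM as recombination frequency x 100."""
    return 100.0 * recombinant / total

# 18 recombinant progeny out of 200 scored
print(map_distance_cm(18, 200))  # → 9.0
```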
The electrophoretic mobility shift assay (EMSA) is a widely used, simple, and inexpensive technique for detecting protein-nucleic acid interactions. The slides cover the essential points of EMSA, including how binding of a protein to a labeled DNA or RNA probe retards the probe's migration through a gel, producing a detectable mobility shift.
This document discusses quantitative trait loci (QTL) mapping. It begins by explaining that quantitative traits are controlled by multiple genes and the regions of the genome that control these traits are called QTLs. It then describes the process of QTL mapping, which involves constructing a linkage map using molecular markers and identifying genomic regions associated with traits. The key steps involve developing a mapping population, generating a saturated linkage map, phenotyping the population, and using approaches like single marker analysis, interval mapping, and composite interval mapping to detect QTLs.
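Single marker analysis, the simplest of the approaches named above, amounts to comparing trait means between marker genotype classes: a large mean difference relative to the pooled variation suggests a QTL linked to the marker. A sketch using a Welch t statistic on invented phenotype values:

```python
import statistics

# Single marker analysis in miniature: compare trait means between marker
# genotype classes with a Welch t statistic. Phenotype values are invented.

def two_sample_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

aa = [12.1, 11.8, 12.5, 12.0, 11.9]  # trait values, marker genotype AA
bb = [10.2, 10.5, 9.9, 10.1, 10.4]   # trait values, marker genotype bb
print(round(two_sample_t(aa, bb), 2))  # large |t| suggests marker-trait linkage
```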
Docking is used to predict the binding of two molecules and evaluate their interaction energy. It involves representing molecules, exploring possible configurations, and ranking them by binding energy using a scoring system. There are two main categories: protein-protein docking treats both molecules as rigid, while protein-ligand docking allows flexibility in the ligand. AutoDock software is commonly used for docking via genetic algorithms and other search methods to minimize energy between a protein and ligand. Docking preparation involves adding hydrogens, assigning charges and merging atoms for both molecules. The results can provide insight into protein interactions and rational drug design.
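The scoring-and-ranking step described above can be caricatured in a few lines: candidate poses are scored (lower predicted binding energy is better) and sorted so the best pose is reported first. The pose names and energies below are invented, not AutoDock output:

```python
# Minimal sketch of pose ranking in docking: score each candidate pose
# (lower predicted binding energy = better) and sort. Values are invented.

poses = {
    "pose_1": -6.2,  # kcal/mol, hypothetical scores
    "pose_2": -8.9,
    "pose_3": -7.4,
}

ranked = sorted(poses.items(), key=lambda kv: kv[1])
best_pose, best_energy = ranked[0]
print(best_pose, best_energy)  # → pose_2 -8.9
```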
This database contains structured controlled vocabularies (ontologies) for various knowledge domains related to plants and their associations. It includes ontologies for plant structure, growth stages, traits, phenotypes, molecular functions, biological processes, cellular components, environments, and a taxonomy ontology. Users can search this database to find information on rice genes, proteins, QTLs, and genetic maps. Different ontologies are meant for different purposes and do not overlap.
The document summarizes the key features and navigation options available on GRAMENE, a database for comparative genomics in plants. It describes how users can search for genes, view genome maps and synteny between species, browse chromosomes at different scales, view gene and protein annotations, and export or retrieve sequence data. Navigation menus are available throughout to select different genomes, chromosomes, or zoom in on specific genomic regions of interest.
Genetic toxicology involves assessing the effects of physical and chemical agents on DNA and genetic processes in living cells. It examines the health impacts of genetic alterations in somatic and germ cells, mechanisms that induce alterations like DNA damage and repair, and formation of gene mutations. A variety of assays are used to detect genetic alterations, with goals of identifying mutagenic chemicals and repair mechanisms. These assays examine DNA damage, mutations in nonmammalian and mammalian models, and chromosomal aberrations. Germ cell mutagenesis is also evaluated through assays measuring gene mutations and chromosomal alterations.
Viruses are acellular organisms that can only replicate inside host cells. They contain either DNA or RNA as their genetic material but lack the enzymes and metabolic machinery needed to synthesize their own proteins. A virus infects a host cell and uses the cell's machinery to produce copies of its genome and proteins, which self-assemble into new virus particles. Viruses are distinguished from other parasites by their inability to grow outside of host cells and their lack of cellular structure. They have played a major role in human diseases throughout history, such as smallpox, measles, and influenza.
Viruses are small infectious agents that cannot replicate outside of host cells. They contain either DNA or RNA surrounded by a protein coat called a capsid. Some viruses have an additional outer envelope. Viruses come in different shapes and sizes, including spherical, helical, polyhedral and more complex structures. Viruses infect specific host cells by binding to cellular receptors and then using the host cell's machinery to replicate their genetic material and make new virus particles.
Hemophilia is a genetic disorder that impairs the body's ability to control blood clotting. It is caused by a defect in the genes that determine how the body produces clotting factor VIII or IX, which are located on the X chromosome. The main treatment for hemophilia is replacement therapy through slow infusions of concentrated clotting factor VIII or IX to replace the missing or low factors. Other treatments include antifibrinolytic medicines to prevent blood clots from breaking down and desmopressin, a man-made hormone that stimulates the release of stored factor VIII.
DNA sequencing is the process of determining the order of nucleotides in a DNA molecule. The Sanger method, developed in 1977, was the most widely used sequencing technique for 25 years. It utilizes chain termination with dideoxynucleotides which lack a 3' OH group, preventing formation of a phosphodiester bond and terminating strand elongation. Four reactions are run in parallel with each dideoxynucleotide labeled with a different color. Gel electrophoresis separates the terminated fragments by size, allowing the DNA sequence to be read by matching fragment sizes to nucleotide colors.
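The read-out step can be mimicked with a toy example: each chain-terminated fragment is identified by its length and the dye color of its terminal ddNTP, and sorting fragments by size yields the sequence, as in capillary electrophoresis. Fragment data below are invented:

```python
# Toy reconstruction of a Sanger read: each fragment is (length, terminal
# base). Sorting by length and reading terminal bases gives the sequence.

fragments = [(3, "G"), (1, "A"), (4, "C"), (2, "T"), (5, "A")]  # invented

sequence = "".join(base for length, base in sorted(fragments))
print(sequence)  # → ATGCA
```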
Functional genomics uses genome-wide experimental approaches to assess gene function on a large scale. It analyzes gene expression through techniques like transcriptomics and proteomics. Transcriptomics analyzes gene expression profiles through RNA sequencing or microarray analysis. Microarray analysis involves hybridizing fluorescently-labeled cDNA or cRNA to microarrays containing DNA probes to measure gene expression levels across thousands of genes simultaneously. Functional genomics provides a global understanding of gene function and molecular interactions through integrated omics approaches.
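Microarray and RNA-seq comparisons are commonly summarized as a log2 fold change between conditions. A minimal sketch, with invented intensity values:

```python
import math

# Differential expression is often reported as log2(treated / control).
# A doubling gives log2 FC = 1; a halving gives -1. Intensities invented.

def log2_fold_change(treated: float, control: float) -> float:
    return math.log2(treated / control)

print(log2_fold_change(2400.0, 600.0))  # → 2.0
```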
Hemophilia is a genetic bleeding disorder caused by a defect in the genes responsible for blood clotting factors VIII or IX. It is usually inherited and affects boys more than girls. Symptoms include prolonged bleeding after injuries or medical procedures. Treatment involves replacing the missing clotting factor through infusions of donated blood products. Current research is developing new formulations of clotting factors that can be stored at room temperature to improve accessibility.
Comparative genomics involves comparing genomes to discover similarities and differences. It can provide insights into evolutionary relationships, help predict gene function, and aid in drug discovery. The first step is often aligning genome sequences using tools like BLAST or MUMmer. Genomes can then be compared at various levels, such as overall nucleotide statistics, genome structure, and coding/non-coding regions. Comparing gene and protein content across genomes helps predict functions. Conserved genomic features across species also aid prediction. Insights into genome evolution come from studying molecular events like inversions and duplications. Comparative genomics has impacted phylogenetics and drug target identification.
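The "overall nucleotide statistics" level of comparison mentioned above can be as simple as contrasting GC content between genomes. A sketch with toy sequences:

```python
# One of the simplest genome-level comparisons: overall GC content.
# Sequences below are toy examples, not real genomes.

def gc_content(seq: str) -> float:
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

genome_a = "ATGCGCGCTATAGCGC"
genome_b = "ATATATGCATATATAT"
print(round(gc_content(genome_a), 1), round(gc_content(genome_b), 1))  # → 62.5 12.5
```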
- Pharmacogenomics deals with how genetic variations influence individual responses to drugs in terms of efficacy and toxicity. It aims to identify individuals who are more or less likely to respond to drugs or require altered doses.
- Pharmacogenetics studies variations in targeted genes or related genes, while pharmacogenomics uses genetic information to guide individualized drug and dose choice.
- Genetic polymorphisms like SNPs can result in different amino acids, protein changes, or no effect. They influence drug metabolism and response.
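The point about SNPs producing different amino acids or no effect can be illustrated with codon translation. Only the handful of codons used below are included in this toy table; the assignments themselves (GAA/GAG = Glu, GTA = Val) are from the standard genetic code:

```python
# Illustration of how a SNP can change the encoded amino acid (missense)
# or leave it unchanged (synonymous). Toy table with only three codons.

CODON_TABLE = {"GAA": "Glu", "GTA": "Val", "GAG": "Glu"}

def translate(codon: str) -> str:
    return CODON_TABLE[codon]

print(translate("GAA"), "->", translate("GTA"))  # A→T at position 2: missense
print(translate("GAA"), "->", translate("GAG"))  # A→G at position 3: synonymous
```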
- Pharmacogenomics offers advantages like personalized medicine but faces barriers like complexity, education needs, and drug company incentives. It is being applied in various stages of clinical trials from target identification to dosing.
Knockout Mouse Technology, by Bikash Karki
The document summarizes the process of creating a knockout mouse through genetic engineering techniques. Key points:
- Knockout mice are created by "knocking out" or inactivating specific genes in embryonic stem cells taken from early mouse embryos.
- There are two main methods - homologous recombination, which precisely replaces a gene with an inactive version, and gene trapping, which randomly inserts DNA to disrupt gene function.
- Genetically modified stem cells are injected into mouse blastocysts to generate chimeric mice, and breeding is used to produce mice that are homozygous for the knocked out gene. Studying these mice helps reveal the function of the targeted gene.
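The final breeding step follows Mendelian segregation: crossing two mice heterozygous for the targeted allele (+/-) yields homozygous knockouts (-/-) in roughly a quarter of offspring. A quick enumeration of the cross:

```python
from itertools import product

# Heterozygote x heterozygote cross: enumerate all allele combinations
# and count the homozygous knockout (-/-) fraction, per Mendel.

parent1, parent2 = ("+", "-"), ("+", "-")
offspring = [tuple(sorted(pair)) for pair in product(parent1, parent2)]

fraction_knockout = offspring.count(("-", "-")) / len(offspring)
print(fraction_knockout)  # → 0.25
```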
This document provides an overview of functional genomics and methods for transcriptome analysis. It discusses two main approaches - sequence-based approaches like expressed sequence tags (ESTs) and serial analysis of gene expression (SAGE), and microarray-based approaches. For sequence-based approaches, it describes how ESTs can provide gene discovery and expression information but have limitations. It outlines the SAGE methodology and gene index construction to organize EST data. For microarrays, it summarizes the basic workflow including sample preparation, hybridization, image analysis and data normalization to identify differentially expressed genes through statistical tests.
Hemophilia A and B are X-linked bleeding disorders caused by deficiencies of coagulation factors VIII and IX respectively. Hemophilia A is more common, affecting about 1 in 5,000-10,000 live male births. The disorders are inherited and primarily affect males, with female carriers able to pass the gene to their sons or daughters. Common clinical manifestations include hemarthrosis, hematomas, and intracranial bleeding. Treatment involves replacement of the missing coagulation factor, through plasma-derived or recombinant products, with the goal of preventing bleeds or treating acute bleeds.
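Replacement dosing is often estimated with a widely quoted rule of thumb: for factor VIII, dose (IU) ≈ body weight (kg) × desired factor rise (%) × 0.5, since 1 IU/kg raises plasma FVIII by about 2%. This is a sketch of the arithmetic only; actual dosing follows product labeling and clinical judgment:

```python
# Rule-of-thumb factor VIII dose: weight (kg) x desired rise (%) x 0.5.
# Illustration only; real dosing follows product labeling and clinicians.

def fviii_dose_iu(weight_kg: float, desired_rise_pct: float) -> float:
    return weight_kg * desired_rise_pct * 0.5

# Raising FVIII by 50% in a 60 kg patient:
print(fviii_dose_iu(60, 50))  # → 1500.0
```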
This document provides information about computer viruses, including what they are, how they spread, different types of viruses, signs that a computer may be infected, and ways to protect against viruses using anti-virus software. It defines viruses and explains that they can replicate and spread without permission. The document then describes several types of viruses like memory resident viruses, direct action viruses, overwrite viruses, and others. It also lists common signs of infection and explains how anti-virus software works to detect and remove viruses, protecting users and their devices.
Site-directed mutagenesis is a technique used to generate specific mutations in DNA at predetermined locations. It involves using a synthetic oligonucleotide primer containing the desired mutation to introduce changes into the DNA sequence during in vitro DNA replication or PCR. This allows researchers to study the effects of mutations and engineer proteins with improved or customized properties. Common methods for site-directed mutagenesis include using single or double primers, cassette mutagenesis by replacing DNA fragments, and PCR-based mutagenesis. The technique has various applications in investigating protein function and developing proteins for commercial uses.
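The primer-based approach described above can be sketched in a few lines: the mismatched codon is placed mid-oligo, flanked by arms that match the template exactly so the primer still anneals. A minimal sketch; all sequences, the flank length, and the function names are invented for illustration:

```python
# Build a mutagenic oligo for primer-based site-directed mutagenesis.
# Assumes the target codon has at least `flank` bases on each side.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Reverse complement of a DNA string (uppercase ACGT only)."""
    return seq.translate(COMPLEMENT)[::-1]

def mutagenic_primers(template, codon_index, new_codon, flank=12):
    """Return forward/reverse primers carrying new_codon at codon_index (0-based)."""
    start = codon_index * 3
    fwd = (template[start - flank:start] + new_codon
           + template[start + 3:start + 3 + flank])
    return fwd, reverse_complement(fwd)

# Toy template: change the 5th codon (index 4) to GCT (alanine)
template = "ATGGCCAAGCTTGAAGATCTGTTCCGTACCGGTAAA"
fwd, rev = mutagenic_primers(template, 4, "GCT")
print(fwd)  # mutant codon sits in the middle of the 27-nt oligo
```

Real primer design also checks melting temperature and secondary structure, which this sketch ignores.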
This document provides an overview of pharmacogenetics and discusses:
1. Pharmacogenetics is the study of how genetic factors influence individual responses to drugs. It considers both environmental and genetic factors that impact drug metabolism and effects.
2. Key concepts include how genetic polymorphisms affect drug metabolizing enzymes and transporters, leading to variability in drug efficacy and risk of adverse reactions between individuals.
3. The field has progressed from early discoveries of genetic disorders affecting drug response to now understanding the effects of common gene variants, with the goal of personalized medicine to optimize drug therapy for each patient.
This document is about hemophilia, a genetic bleeding disorder. It discusses causes, classifications by severity, management strategies, and dental considerations for treating patients with hemophilia.
Toxicogenomics uses genomic technologies to study the effects of toxicants like drugs and chemicals on human health. It provides information on their molecular-level effects and potential toxicities. While this field shows promise to enhance risk assessments, more coordinated efforts are needed to generate data, study existing data in new ways, and address challenges. A large public database and initiatives like a proposed Human Toxicogenomics Initiative could help realize its potential to improve predictive toxicology and public health decisions.
New regulations requiring toxicity data on chemicals, together with growing efforts to predict the likelihood of failure earlier in the drug discovery process, are increasing the use of computational models of toxicity. Predicting human toxicity directly from a molecular structure is now feasible: by using the experimental properties of known compounds as the basis of predictive models, it is possible to develop structure-activity relationships and resulting algorithms related to toxicity. Several examples have been published recently, including models for drug-induced liver injury (DILI), the pregnane X receptor, P450 3A4 time-dependent inhibition, and transporters associated with toxicities. The versatility of such models in drug discovery is illustrated by their ability to increase the efficiency of molecular screening and decrease the number of animal studies. With more computational power available on increasingly smaller devices, and with many collaborative initiatives making data and toxicology models available, mobile apps for predicting human toxicities may become practical, further increasing their utilization.
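The structure-activity idea in the paragraph above can be illustrated with a toy model: predict a toxicity label for a new structure from the labels of its nearest neighbours in descriptor space. A minimal sketch; the descriptors, values, and labels are invented, and a real model would use far more descriptors and compounds:

```python
import math

# Invented training set: (logP, MW/100, PSA/100) descriptors with labels.
train = [
    ((3.9, 2.8, 0.4), "toxic"),
    ((4.2, 3.1, 0.3), "toxic"),
    ((1.1, 1.8, 1.1), "nontoxic"),
    ((0.8, 2.0, 1.3), "nontoxic"),
    ((3.5, 2.6, 0.5), "toxic"),
]

def predict(query, k=3):
    """k-nearest-neighbour vote using Euclidean distance between descriptors."""
    dists = sorted((math.dist(query, desc), label) for desc, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

print(predict((3.7, 2.9, 0.4)))   # query resembles the toxic training compounds
```

The published DILI and P450 models cited above use more sophisticated machine-learning methods, but the principle of inferring activity from descriptor-space similarity is the same.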
Biotechnology and Chemical Weapons Control
Biotechnology has the potential to both aid medical research and fuel chemical weapons proliferation. Advances like genomics, microarrays, proteomics, and combinatorial chemistry could be used to rationally design new drugs, but also new chemical weapons. This threatens the Chemical Weapons Convention by enabling covert development of novel agents from unscheduled precursors. Additionally, development of non-lethal chemical weapons could provide cover for lethal programs and erode norms against chemical weapons use. The Review Conference should address these challenges to strengthen the Convention.
Pre-discovery: understand the disease.
Target identification: choose a molecule to target with a drug.
Target validation: test the target and confirm its role in the disease.
Drug discovery: find a promising molecule (a “lead compound”) that could become a drug.
This document presents a study on the safety evaluation of microalgae in rodents. The study aims to evaluate the acute and sub-acute oral toxicity of various microalgae species, including Cladophora, Spirogyra, and Chlorella, in rats. The study will involve collecting algal biomass, selecting and housing laboratory animals, and administering single and repeated doses of the algal extracts to test groups of rats. The rats will be observed for toxicity and various physiological parameters will be measured over the course of the study period. The results will help determine if the algal species are safe enough to be used as nutritional supplements, pharmaceutical ingredients, or other applications.
QIVIVE extrapolation requires a precise correlation between exposure and the effective chemical concentration at the site where the MIE occurs. This work demonstrates that intracellular distribution is not ruled by physico-chemical parameters alone; rather, it is mainly regulated by specific biologically mediated mechanisms. Substances with apparent chemical similarity may show different distribution profiles, as shown by the intra-nuclear distribution of polyphenols. While our results derive from a limited number of substances applied to one cell line, it is plausible that using different substances and/or different cell lines would also have shown that intracellular distribution is not directly related to physico-chemical parameters. Chemical uptake should therefore be specifically measured; simple extrapolations based on physico-chemical properties may provide misleading decisions.
This document describes the development of a fluorescence-based assay called "ProteAl" to detect the volatile biomarker 2-methylbutanal produced by Proteus bacteria. Gas chromatography-mass spectrometry and Fourier transform infrared spectroscopy were used to identify 2-methylbutanal in the headspace of Proteus cultures. A fluorescent dye, 5-dimethylaminonaphthalene-1-sulfonylhydrazine, was found to react specifically with 2-methylbutanal, producing a distinct green fluorescence. Testing of 95 bacterial strains showed the ProteAl assay can identify Proteus with 100% specificity and sensitivity, providing a simple method for rapid surveillance of this pathogen.
Genomics and proteomics in drug discovery and development
This document discusses the role of genomics and proteomics in drug discovery and development. It explains that genomics and proteomics technologies can help identify new drug targets by comparing gene and protein expression between healthy and diseased cells. Proteomics in particular analyzes changes in protein levels and can quantify individual proteins using techniques like 2D gel electrophoresis and mass spectrometry. The integration of genomics and proteomics provides a more comprehensive understanding of biological systems and is improving the drug discovery process.
1. The document discusses approaches to evaluating the toxicity of industrial chemicals in a humane manner. It examines using laboratory animals versus alternative in vitro methods.
2. Currently, there are no immediate alternatives to using animals for assessing acute toxicity of chemicals through ocular, systemic, and cutaneous toxicity tests. Complex biological systems are difficult to replicate entirely in vitro.
3. Society demands high certainty in toxicity assessments to minimize risk to humans. Toxicologists must be cautious not to reduce predictive quality by replacing animal tests prematurely with alternatives that have not been fully validated.
The Karolinska Institute (KI) is the largest centre for medical education and research in Sweden and the home of the Nobel Prize in Physiology or Medicine.
KI consists of 22 departments and 600 research groups dedicated to improving human health through research and higher education.
The role of the Kohonen/Grafström team has been to guide the application, analysis, interpretation and storage of so called “omics” technology-derived data within the service-oriented subproject “ToxBank”.
A seminar report on the chemical frontiers of living matter seminar series - ...
This seminar report highlights a select few presentations of cutting-edge research being done in various labs across the Paris Science et Lettre (PSL) network.
This document provides a table of contents for a book on computational toxicology. It outlines different parts of the book that cover introductions to toxicology methods, computational methods, applying computers to toxicity assessment of pharmaceuticals and the environment, and new technologies and regulatory perspectives. Each chapter is written by experts in the field and covers topics like quantitative structure-activity relationship models, predicting physicochemical properties, and using computer models in drug discovery and risk assessment.
[Interdisciplinary Toxicology] Evaluation of miR-9 and miR-143 expression in ...
The document discusses a study that evaluated the expression levels of miR-9 and miR-143 in urine samples from 32 sulfur mustard exposed patients and 32 healthy subjects. The study found that the expression levels of both miR-9 and miR-143 were significantly decreased in the sulfur mustard exposed patients compared to the healthy subjects, with p-values of 0.0480 and 0.0272, respectively. This suggests an imbalance in several pathways involved in the pathogenic effects of sulfur mustard exposure, such as NF-κB signaling, TGF-β signaling, the WNT pathway, inflammation, DNA repair, and apoptosis. The decreases in miR-9 and miR-143 expression may play an important role in the pathology of patients exposed to sulfur mustard.
Microarrays can be used for gene expression profiling, comparative genomics, disease diagnosis, drug discovery, and toxicological research. They allow researchers to examine thousands of genes simultaneously and to see changes in gene expression patterns. Microarrays have applications in areas like cancer classification, pharmacogenomics, and toxicogenomics. While a powerful tool, microarrays also have limitations: they are expensive to create and take time to develop.
Pluripotent stem cells: an in vitro model for nanotoxicity
This document discusses the use of pluripotent stem cells (PSCs) as an in vitro model for assessing nanotoxicity. It notes that existing in vitro and in vivo models have limitations, and that PSCs can differentiate into various cell types and provide a more realistic model that reflects human physiology. PSCs are proposed as a promising alternative platform that could help address current challenges in predicting nanomaterial toxicity and screening new drugs and materials in a reliable and cost-effective way. The review focuses on how induced pluripotent stem cells and embryonic stem cells could be used to establish three-dimensional tissue models for more accurately assessing the hazardous effects of nanomaterials.
The transformational role of polymerase chain reaction (PCR) in environmental...
This document discusses the transformational role of polymerase chain reaction (PCR) in environmental health research. PCR allows for exponential amplification of target DNA sequences, which has enabled rapid and sensitive detection of pathogens in environmental samples as an alternative to traditional culture methods. While PCR is widely used in developed countries, its benefits have yet to be fully realized in developing countries like Nigeria. The document provides background on DNA replication and the basics of how PCR works to exponentially amplify DNA. It argues that PCR could greatly aid environmental health monitoring and disease diagnosis in Nigeria.
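The exponential amplification described above is easy to quantify: each thermal cycle at most doubles the number of target copies, so n cycles yield up to 2^n copies per starting template. A minimal sketch including an efficiency factor, since real reactions fall short of perfect doubling; the efficiency figure used is illustrative:

```python
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Copies after `cycles` rounds; efficiency=1.0 means perfect doubling."""
    return initial_copies * (1 + efficiency) ** cycles

print(pcr_copies(1, 30))                         # 1 copy -> ~1.07e9 after 30 ideal cycles
print(round(pcr_copies(1, 30, efficiency=0.9)))  # a more realistic per-cycle yield
```

This billion-fold gain from a single template is what makes PCR sensitive enough to detect trace pathogens in environmental samples.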
Redbook 2000: Toxicological Principles for the Safety Assessment of Food Ingredients, Chapter IV.B.3: Pathology Considerations in Toxicity Studies.
Molecular epidemiology entails the inclusion in epidemiological research of biologic measurements made at the molecular level, and is thus an extension of the increasing use of biologically based measures in epidemiological research (McMichael, 1994). The term ‘molecular epidemiology’ may suggest the existence of a sub-discipline with substantive new research content. Molecular techniques, however, are directed principally at enhancing the measurement of exposure, effect, or susceptibility, not at formulating new etiologic hypotheses. As techniques of refinement and elaboration, the integration of molecular measures into mainstream epidemiologic research can offer higher-resolution answers in relation to disease causation.
COMPUTATIONAL TOOLS FOR PREDICTION OF NUCLEAR RECEPTOR MEDIATED EFFECTS
Endocrine disrupting chemicals pose a significant threat to human health, society, and the environment. Many of these chemicals elicit their toxicological effects through nuclear hormone receptors, such as the estrogen receptor. Computational tools for predicting receptor-mediated effects have been envisaged as a way to prioritize chemicals for toxicological evaluation, reducing the amount of costly experimental testing and enabling early alerts for newly designed compounds.
The document discusses the Human Genome Project (HGP), including its goals, key milestones, and findings. It also examines some of the ethical, legal, and social issues raised by the HGP.
The HGP was an international scientific research project begun in 1990 that aimed to map and sequence the entire human genome. It was completed in 2003, revealing that the human genome contains over 3 billion DNA base pairs and around 30,000 genes. However, the HGP also raised important ethical questions around issues like privacy, ownership, justice, and the potential for discrimination.
This document discusses the use of monoclonal antibodies for cancer therapy. It provides background on conventional chemotherapy and highlights limitations. It then covers the history and development of monoclonal antibodies, including their production and mechanisms of targeting cancer cells, such as antigen cross-linking, activating death receptors, and delivering cytotoxic agents. Specific examples of toxin-immunoconjugates and antibody-directed enzyme prodrug therapy are described. The monoclonal antibody Rituximab is discussed as the first FDA-approved therapeutic monoclonal antibody for cancer.
1) Computer-aided drug design uses computational techniques to aid in the drug discovery process, including finding and storing relevant information, modeling existing lead compounds, and developing new lead compounds.
2) Key techniques include pharmacophore modeling to identify functional groups important for activity, 3D QSAR to develop quantitative structure-activity models, and docking to model interactions of ligands with protein targets.
3) Developing new leads can involve de novo design to build ligands into a target structure, database searching using pharmacophore queries, and combinatorial library design to rapidly screen many potential compounds.
1. Computer-aided drug design uses computational techniques to aid in the drug discovery process, including finding and storing relevant information, modeling existing lead compounds, and developing new lead compounds.
2. Key techniques include pharmacophore modeling to identify functional groups important for activity, quantitative structure-activity relationship modeling to predict activity, molecular docking to study binding, and free energy perturbation calculations to compare binding of candidates.
3. The workflow involves generating working models of ligands or targets, proposing new lead structures through techniques like de novo design or database searching, and evaluating candidates through synthesis and testing.
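Database searching and library design of the kind described above typically begin with cheap property filters before any pharmacophore matching or docking. A minimal sketch using thresholds in the spirit of Lipinski's rule of five; the compound records and names are invented for illustration:

```python
# Drug-likeness filter: keep compounds whose computed properties fall
# inside rule-of-five ranges. Lipinski's heuristic allows one violation.

RULES = {
    "mw": lambda v: v <= 500,         # molecular weight
    "logp": lambda v: v <= 5,         # lipophilicity
    "h_donors": lambda v: v <= 5,     # hydrogen-bond donors
    "h_acceptors": lambda v: v <= 10, # hydrogen-bond acceptors
}

def drug_like(compound, max_violations=1):
    """Count rule failures; pass if violations stay within the allowance."""
    violations = sum(0 if check(compound[prop]) else 1
                     for prop, check in RULES.items())
    return violations <= max_violations

library = [
    {"name": "cpd-1", "mw": 320.4, "logp": 2.1, "h_donors": 2, "h_acceptors": 5},
    {"name": "cpd-2", "mw": 710.9, "logp": 6.3, "h_donors": 6, "h_acceptors": 12},
]
print([c["name"] for c in library if drug_like(c)])  # only cpd-1 passes
```

Filtering a virtual library this way narrows the set of candidates sent on to the more expensive 3D techniques listed above.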
This document discusses the use of monoclonal antibodies for cancer therapy. It provides background on conventional chemotherapy and highlights limitations. It then covers the history and development of monoclonal antibodies, including their production and mechanisms of targeting cancer cells, such as antigen cross-linking, activating death receptors, and delivering cytotoxic agents. Specific examples of toxin-immunoconjugates and antibody-directed enzyme prodrug therapy are described. The mechanism and applications of the monoclonal antibody Rituximab for lymphoma are discussed. In conclusion, the document notes the potential for optimizing monoclonal antibody combinations with chemotherapy and radiation therapy.
Toxicology Letters 140–141 (2003) 145–148
www.elsevier.com/locate/toxlet
Review
Toxicogenomics: challenges and opportunities
G. Orphanides *
Syngenta Central Toxicology Laboratory, Alderley Park, Macclesfield, Cheshire SK10 4TJ, UK
Received 15 September 2002; accepted 12 December 2002
Abstract
Toxicogenomics describes the measurement of global gene expression changes in biological samples exposed to
toxicants. This new technology promises to greatly facilitate research into toxicant mechanisms, with the possibility of
assisting in the detection of compounds with the potential to cause adverse health effects earlier in the development of
pharmaceutical and chemical products. In this short review, I discuss the opportunities presented by toxicogenomics,
the challenges we face in the application of these tools, and the progress we have made in realising the potential of these
new genomic approaches.
© 2003 Elsevier Science Ireland Ltd. All rights reserved.
Keywords: Toxicogenomics; Microarrays; Mechanistic toxicology; Predictive toxicology
* Tel.: +44-1625-510803; fax: +44-1625-590249. E-mail address: george.orphanides@syngenta.com (G. Orphanides).
0378-4274/03/$ - see front matter © 2003 Elsevier Science Ireland Ltd. All rights reserved. doi:10.1016/S0378-4274(02)00500-3

1. Introduction

The publication of the draft sequence of the human genome almost 2 years ago signalled the arrival of the genomic era of the biological sciences (International Human Genome Sequencing Consortium, 2001). This newfound knowledge accelerated the development of tools that allow biological processes to be examined on a global scale. Among these tools are those that facilitate the simultaneous measurement of the expression levels of thousands of different genes, technologies known collectively as gene expression profiling (Duggan et al., 1999; Brown and Botstein, 1999). Toxicologists quickly realised the potential of these new tools to advance their discipline and a new field was born. The application of gene expression profiling to toxicology, termed toxicogenomics, presents us with opportunities to define, at unprecedented levels of detail, the molecular events that precede and accompany toxicity, promising to shed light on toxic mechanisms that are presently poorly understood (Afshari et al., 1999; Farr and Dunn, 1999; Nuwasyr et al., 1999; Pennie, 2000; Pennie et al., 2000; Orphanides et al., 2001; Gant, 2002; Ulrich and Friend, 2002). Moreover, it is hoped that gene expression changes induced upon chemical exposure will provide a means of predicting mechanisms of toxicity more rapidly.

Used in conjunction with existing tools available to the toxicologist, toxicogenomics promises significant advances in research and investigative toxicology. These advances include:
- a more detailed appreciation of molecular mechanisms of toxicity;
- faster screens for substance toxicity;
- enhanced extrapolation between experimental animals and humans in the context of risk assessment.

In this article, I discuss the use of toxicogenomics in mechanistic and predictive toxicology. In particular, I examine how far we have come towards realising the full potential of these tools.

2. Use of toxicogenomics to predict mechanisms of toxicity

A goal of modern toxicology is to protect the human population from exposure to harmful substances by identifying compounds with the potential to cause toxicity. Most current testing strategies measure the effects of long-term chemical exposure in experimental animals. Through the identification of gene expression changes associated with chemical exposure, the hope is that toxicogenomics will facilitate the development of methods that predict the long-term effects of compounds using short-term assays. The underlying assumption is that compounds that induce toxicity through similar mechanisms will elicit comparable changes in gene expression. It is, therefore, possible that toxicant-induced expression changes will act as sensitive and specific indicators of toxic mechanism. In this way, gene expression ‘fingerprints’ can be identified for multiple mechanisms of toxic insult and entered into a database. The gene expression profile of a suspected toxicant can then be analysed for similarity with the expression fingerprints of known toxicants.

The predictive capacity of gene expression profiling has been demonstrated most compellingly in a clinical setting. A number of studies have reported the classification of tumour type using transcript profiling (reviewed by Clarke et al., 2001). For example, van’t Veer et al. (2002) identified a gene expression ‘fingerprint’ capable of distinguishing metastatic and non-metastatic breast tumours. This approach has also been used successfully to predict chemical activity. The most comprehensive study of this kind involved a combination of chemical treatments and mutant strains of the yeast Saccharomyces cerevisiae to generate a gene expression database capable of predicting the biological effects of exogenous compounds (Hughes et al., 2000).

Two recent studies indicate that toxicogenomics can be used to predict chemical mode of action in toxicologically relevant species (Waring et al., 2001; Hamadeh et al., 2002). These reports demonstrate that the liver gene expression profiles associated with exposure of rats to different hepatotoxins segregate according to mechanisms of toxicity. Thus, it appears that the assertion that toxicogenomics has the potential to provide enhanced methods for predicting toxicity is well founded. The rodent liver is ideally suited for demonstrating proof of principle: the hepatocyte is the predominant cell type, therefore hepatotoxic chemicals will induce mechanistically linked gene expression changes in the majority of cells that make up the organ. However, many toxicants target only a small proportion of cells in an organ. A challenge for the future application of toxicogenomics in a predictive context is the identification of diagnostic gene expression changes originating from cells that represent a minority population. Nevertheless, it appears that this general approach holds much promise.

3. Toxicogenomics as a mechanistic tool

The global analysis of gene expression levels has found many diverse applications in modern biology. A particular strength of this approach as applied to toxicology is that it is holistic and, therefore, provides an unbiased view of alterations in cellular processes associated with chemical insult. In this regard, global gene expression profiling is an ideal tool for hypothesis generation in the context of mechanistic toxicology. Individual genes, or entire pathways, implicated in a mechanism of toxicity using this technology can be further evaluated using more conventional approaches.
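The expression-‘fingerprint’ matching described above can be sketched as a similarity search: score a suspect compound's profile against stored fingerprints for known toxic mechanisms and report the closest match. A minimal sketch using Pearson correlation; the gene values and mechanism names are invented for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length profiles."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented database: log-ratio fingerprints over the same five genes
fingerprints = {
    "peroxisome proliferation": [2.1, 1.8, -0.4, 0.1, 1.5],
    "oxidative stress": [-0.2, 0.3, 2.2, 1.9, -1.1],
}

def best_match(profile):
    """Return (mechanism, r) for the most similar stored fingerprint."""
    scores = {m: pearson(profile, fp) for m, fp in fingerprints.items()}
    return max(scores.items(), key=lambda kv: kv[1])

query = [1.9, 1.6, -0.2, 0.2, 1.2]   # profile of a suspected toxicant
mech, r = best_match(query)
print(mech, round(r, 3))
```

Real databases hold thousands of genes per fingerprint and use more robust similarity measures, but the matching principle is the same.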
A major challenge in the application of gene expression technologies to mechanistic toxicology is the identification of gene regulation events linked directly to the mode of toxicity under investigation. Successful application of toxicogenomics in this context requires an understanding of the link between gene expression changes and phenotype (Smith, 2001). The simultaneous measurement of changes in the expression levels of tens of thousands of genes is now becoming routine. However, the increase in the rate at which gene expression data can be generated has not been accompanied by corresponding advances in our ability to interpret them as biologically meaningful information.

Any given toxicant is likely to induce alterations in the expression levels of many different genes, and only some of these genes will play a role in the mechanism of toxicity. Appropriate experimental design can facilitate the identification of relevant gene changes. For example, the use of animal models in which pathways relevant to the mode of action have been inactivated or modified can aid the identification of gene expression changes directly linked to the molecular mechanism of a toxicant. Transgenic ‘knock-out’ mice resistant to the toxic effects of the compound being studied can be used to identify genes whose regulation is not directly related to the development of toxicity. Changes in gene expression seen in these knock-out mice exposed to toxicant are unlikely to be linked to the adverse effects of the compound. Therefore, any changes in gene expression that occur in a sensitive wild-type animal, but not in a resistant knock-out animal, are more likely to be directly associated with the mechanism of toxicity. While not all gene expression changes that match this description will be directly involved in the mode of action of a toxicant, this strategy focuses attention on the most likely candidates. This approach has been used to implicate the lactoferrin protein in the mechanism of rodent non-genotoxic hepatocarcinogenesis induced by peroxisome proliferators (Hasmall et al., 2002).

Toxicant-induced gene expression changes are often difficult to interpret in isolation. Careful selection of compound dose and time of exposure and the concurrent collection of conventional toxicology data (e.g. biochemical, clinical and histopathological data) can greatly facilitate the interpretation of toxicogenomic data. A successful toxicogenomic study will, therefore, be multidisciplinary, requiring the expert skills of the toxicologist, pathologist and molecular biologist (Orphanides et al., 2001).

4. Conclusions

Toxicogenomics is an evolving science. We have witnessed many successes of the genomic sciences in other fields of biology, and these tools are now beginning to enhance our ability to understand and predict mechanisms of toxicity. It is likely that toxicogenomics, along with other global profiling tools such as proteomics (Pandey and Mann, 2000) and metabonomics (Nicholson et al., 2002), will revolutionise research and investigative toxicology, leading to a holistic appreciation of molecular responses to toxicants. However, there is still a long way to go before the full potential of toxicogenomics is realised. The sheer weight of data generated by gene expression profiling can be overwhelming. Extraction of value from these data will be facilitated by the development of toxicogenomic databases capable of being interrogated by expert and non-expert user alike. Moreover, the identification of gene expression changes of predictive value or mechanistic significance often requires the use of sophisticated computational tools, which will evolve alongside gene expression methodologies (Bassett et al., 1999). One thing we can be confident about is that the tools of the genomic era are here to stay. The toxicologist of the future may feel equally at home with a toxicogenomic data set as with a histopathology slide.

Acknowledgements

I thank Drs Ian Kimber and Jonathan Moggs for critical comments on this article and apologise to those authors whose work I have not cited due to limitations on article length.
References

Afshari, C.A., Nuwaysir, E.F., Barrett, J.C., 1999. Application of complementary DNA microarray technology to carcinogen identification, toxicology, and drug safety evaluation. Cancer Res. 59, 4759–4760.
Bassett, D.E., Eisen, M.B., Boguski, M.S., 1999. Gene expression informatics: it’s all in your mine. Nat. Gen. 21 (supplement), 51–55.
Brown, P.O., Botstein, D., 1999. Exploring the new world of the genome with DNA microarrays. Nat. Gen. 21 (supplement), 33–37.
Clarke, P.A., te Poele, R., Wooster, R., Workman, P., 2001. Gene expression microarray analysis in cancer biology, pharmacology and drug development: progress and potential. Biochem. Pharmacol. 62, 1311–1336.
Duggan, D.J., Bittner, M., Chen, Y., Meltzer, P., Trent, J., 1999. Expression profiling using cDNA microarrays. Nat. Gen. 21 (supplement), 10–14.
Farr, S., Dunn, R.T., II, 1999. Concise review: gene expression applied to toxicology. Toxicol. Sci. 50, 1–9.
Gant, T.W., 2002. Classifying toxicity and pathology by gene-expression profile: taking a lead from studies in neoplasia. Trends Pharm. Sci. 23, 388–393.
Hamadeh, H.K., Bushel, P.R., Jayadev, S., DiSorbo, O., Bennett, L., Li, L., Tennant, R., Stoll, R., Barrett, J.C., Paules, R.S., Blanchard, K., Afshari, C.A., 2002. Prediction of compound signature using high density gene expression profiling. Toxicol. Sci. 67, 232–240.
Hasmall, S., Orphanides, G., James, N., Pennie, W., Hedley, K., Soames, A., Kimber, I., Roberts, R., 2002. Down-regulation of lactoferrin by PPARalpha ligands: role in perturbation of hepatocyte proliferation and apoptosis.
Hughes, T.R., ..., Friend, S.H., 2000. Functional discovery via a compendium of expression profiles. Cell 102, 109–126.
International Human Genome Sequencing Consortium, 2001. Initial sequencing and analysis of the human genome. Nature 409, 860–922.
Nicholson, J.K., Connelly, J., Lindon, J.C., Holmes, E., 2002. Metabonomics: a platform for studying drug toxicity and gene function. Nat. Rev. Drug Disc. 1, 153–161.
Nuwasyr, E.F., Bittner, M., Trent, J., Barrett, J.C., Afshari, C.A., 1999. Microarrays and toxicology: the advent of toxicogenomics. Mol. Carcinogen. 24, 153–159.
Orphanides, G., Pennie, W.D., Moffat, G.J., Kimber, I., 2001. Toxicogenomics: theoretical and practical considerations. Comm. Toxicol. 7, 333–346.
Pandey, A., Mann, M., 2000. Proteomics to study genes and genomes. Nature 405, 837–846.
Pennie, W.D., 2000. Use of cDNA microarrays to probe and understand the toxicological consequences of altered gene expression. Toxicol. Lett. 112–113, 473–477.
Pennie, W.D., Tugwood, J.D., Oliver, G.J.A., Kimber, I., 2000. The principles and practice of toxicogenomics: applications and opportunities. Toxicol. Sci. 54, 277–283.
Smith, L.L., 2001. Key challenges for toxicologists in the 21st century. Trends Pharmacol. Sci. 22, 281–285.
Ulrich, R., Friend, S.H., 2002. Toxicogenomics and drug discovery: will new technologies help us produce better drugs? Nat. Rev. Drug Disc. 1, 84–88.
van’t Veer, L.J., Dai, H., van de Vijver, M.J., He, Y.D., Hart, A.A.M., Mao, M., Peterse, H.L., van der Kooy, K., Marton, M.J., Witteveen, A.T., Schreiber, G.J., Kerkhoven, R.M., Roberts, C., Linsley, P.S., Bernards, R., Friend, S.H., 2002. Gene expression profiling predicts clinical outcome of
Toxicol. Sci. 68, 304 Á/313. breast cancer. Nature 415, 530 Á/536.
Hughes, T.R., Marton, M.J., Jones, A.R., Roberts, C.J., Waring, J.F., Jolly, R.A., Ciurlionis, R., Lum, P.Y., Praes-
Stoughton, R., Armour, C.D., Bennett, H.A., Coffey, E., gaard, J.T., Morfitt, D.C., Buratto, B., Roberts, C., Schadt,
Dai, H., He, Y.D., Kidd, M.J., King, A.M., Meyer, M.R., E., Ulrich, R.G., 2001. Clustering of hepatotoxins based on
Slade, D., Lum, P.Y., Stapaniants, S.B., Shoemaker, D.D., mechanism of toxicity using gene expression profiles.
Gachotte, D., Chakraburtty, K., Simon, J., Bard, M., Toxicol. Appl. Pharmacol. 175, 28 Á/42.