This poster by Mary Shimoyama about RGD's PhenoMiner Phenotype Database and Data Mining tool was presented at the December 2009 Rat Genomics and Models meeting at Cold Spring Harbor Laboratory.
Research Methodology:
Course Outline
1. Meaning of Research
* Nature of Research
* Roles of research
* Ways of gaining knowledge
* Problems of research in developing countries
2. Types of research
* Survey research method
* Pure (basic and fundamental) and applied (survey, experimental, ex post facto) research
* Instrumentation in research (instruments of data collection, e.g. observations, questionnaires)
* Literature review (more notes at www.airloaded.com)
Recap of the CES 2015 show in Las Vegas, presented at Usine IO in Paris for Cap Digital and Systematic on 3 February 2015. I can deliver a customized replay of this talk in-house.
This poster by Melinda Dwinell about RGD's Phenotypes and Models Portal was presented at the December 2009 Rat Genomics and Models meeting at Cold Spring Harbor Laboratory.
A Review of Various Methods Used in the Analysis of Functional Gene Expressio... (ijitcs)
Sequencing projects arising from high-throughput technologies, including DNA microarray sequencing, have made it possible to measure simultaneously the expression levels of millions of genes in a biological sample, as well as to annotate those genes and identify their role (function). Consequently, to better manage and organize this significant amount of information, bioinformatics approaches have been developed. These approaches provide a representation and a more 'relevant' integration of the data in order to test and validate researchers' hypotheses. In this context, this article describes and discusses some techniques used for the functional analysis of gene expression data.
An Ensemble of Filters and Wrappers for Microarray Data Classification (mlaij)
The development of microarray technology has supplied a large volume of data to many fields. Gene microarray analysis and classification have demonstrated an effective way of diagnosing diseases and cancers. Because the data obtained from microarray technology are very noisy and have thousands of features, feature selection plays an important role in removing irrelevant and redundant features and reducing computational complexity. There are two important approaches to gene selection in microarray data analysis: filters and wrappers. To select a concise subset of informative genes, we introduce a hybrid feature selection method that combines the two approaches. Candidate features are first selected from the original set via several effective filters; the candidate feature set is then further refined by more accurate wrappers. Thus, we can take advantage of both the filters and the wrappers. Experimental results based on 11 microarray datasets show that our mechanism can be effective with a smaller feature set. Moreover, these feature subsets can be obtained in a reasonable time.
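The filter-then-wrapper pipeline described in this abstract can be sketched with scikit-learn on synthetic data; the toy dataset, the ANOVA F-score filter, the RFE wrapper, and the cutoffs below are illustrative choices, not the paper's actual configuration.

```python
# Minimal sketch of hybrid feature selection: a cheap univariate filter
# first trims the feature set, then a more expensive wrapper (recursive
# feature elimination around a linear SVM) refines the survivors.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.svm import LinearSVC

# Toy stand-in for a microarray matrix: 60 samples x 500 "genes".
X, y = make_classification(n_samples=60, n_features=500,
                           n_informative=10, random_state=0)

# Stage 1 (filter): keep the 50 genes with the best ANOVA F-scores.
filt = SelectKBest(f_classif, k=50).fit(X, y)
X_filtered = filt.transform(X)

# Stage 2 (wrapper): RFE around a linear SVM refines the candidates to 10.
wrapper = RFE(LinearSVC(dual=False), n_features_to_select=10).fit(X_filtered, y)

# Map the wrapper's choices back to the original gene indices.
selected = filt.get_support(indices=True)[wrapper.get_support()]
print(len(selected))  # 10 genes survive both stages
```

The key property the abstract relies on is visible here: the wrapper only ever evaluates the 50 filter survivors, so its expensive model fitting never touches the full 500-gene space.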
Mining frequent patterns is an NP-hard problem and has become a hot topic in recent research. Moreover, protein datasets contain distinct patterns that can be used in many areas such as drug discovery, disease prediction, etc. In earlier decades, pattern discovery and protein fold recognition were carried out with biophysics and biochemistry approaches, and X-ray crystallography and NMR have been used for protein structure prediction; these are very expensive and time consuming, while a mathematical approach can reduce the cost of such laboratory experiments. Many computational methods have been applied to protein fold detection, such as graph-based algorithms and data mining viewpoints like classification or clustering, and all have their advantages and drawbacks. Pattern matching in protein sequential datasets for fold recognition plays a meaningful role in the field of bioinformatics, since it enables prediction of unknown protein functions. There are many pattern recognition algorithms, but in this work we used PrefixSpan; the reason for selecting this algorithm is discussed in section 2. To evaluate the results of our experiments we used the SCOPE dataset, a classified protein dataset, and ASTRAL, a discriminative sequential dataset derived from SCOPE.
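The PrefixSpan algorithm named above can be illustrated with a minimal sketch; the three toy sequences are invented for the example, and real protein pattern mining would of course run over full residue sequences with tuned support thresholds.

```python
# Minimal PrefixSpan sketch: grow frequent prefixes one residue at a
# time, projecting each sequence onto the suffix after the first
# occurrence of the extending residue.
def prefixspan(sequences, min_support, prefix=""):
    """Return frequent sequential patterns with their support counts."""
    patterns = {}
    # Count, for each residue, how many (projected) sequences contain it.
    counts = {}
    for seq in sequences:
        for ch in set(seq):
            counts[ch] = counts.get(ch, 0) + 1
    for ch, support in counts.items():
        if support < min_support:
            continue
        new_prefix = prefix + ch
        patterns[new_prefix] = support
        # Project each matching sequence on the suffix after the first ch.
        projected = [s[s.index(ch) + 1:] for s in sequences if ch in s]
        patterns.update(prefixspan(projected, min_support, new_prefix))
    return patterns

toy = ["MKVL", "MKAL", "MSVL"]          # three invented toy "proteins"
freq = prefixspan(toy, min_support=2)
print(freq["MK"])  # the subsequence "MK" occurs in 2 of the 3 sequences
```

The projection step is what keeps PrefixSpan efficient compared with generate-and-test approaches: each recursive call only scans the shrinking suffixes, never the whole database.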
Leveraging Publicly Accessible Clinical Trials Data Sharing, Dissemination an... (Vaticle)
In the broader realm of the advancement of science and the betterment of the human condition, there are several purported benefits for sharing clinical trials and research data. The scientific community has just begun to embrace open-access datasets to build their knowledge base, gain insight into new discoveries, and generate novel data-driven hypotheses that were not initially formulated in the studies. With the increasing amount of clinical trial data available, comes the need to leverage a multitude of shared datasets. Your knowledge base needs to facilitate discovery across research domains.
This talk highlights the data sharing, dissemination, and repurposing of clinical and molecular studies generated by government-funded research consortia. Further, we are building a new knowledge base resource, IMMGRAKN, to facilitate translational discovery from crowd-sourced clinical trials data in ImmPort (www.immport.org), an NIH-NIAID funded open-access immunology database and analysis portal. Case studies demonstrating the use of IMMGRAKN will be discussed.
tranSMART Community Meeting 5-7 Nov 13 - Session 5: Advancing tranSMART Analy...David Peyruc
tranSMART Community Meeting 5-7 Nov 13 - Session 5: Advancing tranSMART Analytical Capabilities with Knowledge Content
Sirimon Ocharoen, Thomson Reuters
To effectively analyze data in tranSMART, a biological analysis/knowledge-based approach is needed. Through a case study, we will demonstrate how systems biology content can be integrated in tranSMART to enable functional analysis and biological interpretation. We will also share our experience and user feedback from various projects.
ANALYSIS OF PROTEIN MICROARRAY DATA USING DATA MINING (ijbbjournal)
Recent progress in biology, medical science, bioinformatics, and biotechnology has produced tremendous amounts of biodata that demand in-depth analysis. At the same time, recent progress in data mining research has led to the development of numerous efficient and scalable methods for mining interesting patterns in large databases. This paper bridges the two fields, data mining and bioinformatics, for successful mining of biological data. Microarrays constitute a new platform that allows the discovery and characterization of proteins.
A high-throughput approach for multi-omic testing for prostate cancer research (Thermo Fisher Scientific)
The proliferation of genetic testing technologies and genome-scale studies has increased our understanding of the genetic basis of complex diseases. However, this information alone tells an incomplete story of the underlying biology. Integrative approaches that combine data from multiple sources, such as the genome, transcriptome and/or proteome, can provide a more comprehensive and multi-dimensional model of complex diseases. Similarly, the integration of multiple data types in disease screening can improve our understanding of disease in populations. In a series of groundbreaking multi-omic, population-based studies of prostate cancer, researchers at the Karolinska Institutet in Stockholm, Sweden identified sets of genetic and protein biomarkers that when evaluated together with other clinical research data performed significantly better in predicting cancer risk (1,2) than the most-widely used single protein biomarker, the prostate-specific antigen (PSA).
Part of the MaRS BioEntrepreneurship series session: Clinical Trials Strategy
Speaker: Sue Gilbert Evans
This is available as an audio presentation:
http://www.marsdd.com/bioent/feb12
Also view the event blog and summary:
http://blog.marsdd.com/2007/02/14/bioentrepreneurship-clinical-trial-strategies-its-never-too-soon/
In this research, a hybrid wrapper model is proposed to identify the featured gene subset from gene expression data. To balance exploration and exploitation, a hybrid model combining a popular meta-heuristic algorithm, the spider monkey optimizer (SMO), with simulated annealing (SA) is applied. In the proposed model, ReliefF is used as a filter to obtain the relevant gene subset from the dataset by removing noise and outliers prior to feeding the data to the SMO wrapper. To enhance the quality of the solution, simulated annealing is deployed as a local search alongside the SMO in the second phase, which guides the detection of the most optimal feature subset. To evaluate the performance of the proposed model, a support vector machine (SVM) is used as the fitness function to recognize the most informative biomarker genes from cancer datasets along with University of California, Irvine (UCI) datasets. To further evaluate the model, 4 different classifiers (SVM, naïve Bayes (NB), decision tree (DT), and k-nearest neighbors (KNN)) are used. The experimental results and analysis show that ReliefF-SMO-SA-SVM performs better than its state-of-the-art counterparts. For cancer datasets, our model performs better in terms of accuracy, with a maximum of 99.45%.
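The simulated-annealing local-search phase described above can be sketched in isolation; this is only a hedged illustration on synthetic data with an SVM-accuracy fitness, and it deliberately omits the ReliefF filter and the spider monkey optimizer that the full model wraps around it.

```python
# Sketch of SA local search over a gene-subset bit mask: flip one gene
# in or out, and accept worse subsets with probability exp(delta/temp).
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = random.Random(0)
X, y = make_classification(n_samples=80, n_features=30,
                           n_informative=6, random_state=0)

def fitness(mask):
    """Cross-validated SVM accuracy on the selected genes."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

mask = np.array([rng.random() < 0.5 for _ in range(X.shape[1])])
cur_fit = fitness(mask)
best, best_fit, temp = mask.copy(), cur_fit, 1.0
for _ in range(40):
    cand = mask.copy()
    cand[rng.randrange(len(cand))] ^= True      # flip one gene in/out
    cand_fit = fitness(cand)
    delta = cand_fit - cur_fit
    if delta > 0 or rng.random() < np.exp(delta / temp):
        mask, cur_fit = cand, cand_fit          # accept, possibly uphill
    if cur_fit > best_fit:
        best, best_fit = mask.copy(), cur_fit
    temp *= 0.9                                  # geometric cooling

print(round(best_fit, 2))
```

The acceptance rule is what "balances exploration and exploitation": early on (high temperature) worse subsets are often accepted, while late in the run the search settles greedily around the best subset found.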
mHealth Israel_Ryo Kosaka_AIST_National Institute of Advanced Industrial Scie... (Levi Shapiro)
Presentation by Ryo Kosaka, Senior Research Scientist, Health Research Institute, National Institute of Advanced Industrial Science and Technology (AIST). Includes an overview of priority strategies in Life Sciences and Biotech and description of the organization of the Life Sciences and Biotech department. Recent projects include a Portable System for High-Speed DNA Quantification, Application of a cell microarray chip for clinical diagnosis and single cell analysis, Safe and Secure Artificial Heart, New diagnosis for liver fibrosis utilizing glycans, AIST ventures from the department of Life Science & Biotech as well as International cooperation.
Similar to Phenotype Database and Data Mining at RGD (20)
Human QTL Data within the Rat Genome Database (Jennifer Smith)
This poster by Tim Lowry about RGD's human QTL data was presented at the December 2009 Rat Genomics and Models meeting at Cold Spring Harbor Laboratory.
The Diabetes Portal at the Rat Genome Database (Jennifer Smith)
This poster by Tom Hayman about RGD's Diabetes Disease Portal was presented at the December 2009 Rat Genomics and Models meeting at Cold Spring Harbor Laboratory.
RGD--A Repository and Cumulative Resource for Rat Strains (Jennifer Smith)
This poster by Rajni Nigam about RGD's rat strain data was presented at the December 2009 Rat Genomics and Models meeting at Cold Spring Harbor Laboratory.
This poster by Diane Munzenmaier about RGD's new Physiological Pathways Portal was presented at the December 2009 Rat Genomics and Models meeting at Cold Spring Harbor Laboratory.
Pathway resources at the Rat Genome Database (Jennifer Smith)
This poster by Victoria Petri about RGD's pathway resources was presented at the December 2009 Rat Genomics and Models meeting at Cold Spring Harbor Laboratory.
This poster by Jennifer Smith about methods of education and community outreach and input at the Rat Genome Database was presented at the December 2009 Rat Genomics and Models meeting at Cold Spring Harbor Laboratory.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference, 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a workshop in which the participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren't traditionally found in software curriculums; many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Enhancing adoption of Open Source Libraries: A case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many features provide convenience and capability while sacrificing security. This best practices guide outlines steps users can take to better protect personal devices and information.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
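The core idea sketched in this abstract, dropping seed bytes whose mutation never changes program behaviour, can be illustrated with a toy example; the parser and seed below are invented stand-ins, not DIAR's actual implementation or AFL's coverage feedback.

```python
# Toy illustration of seed trimming for fuzzing: a byte is flagged as
# uninteresting if no value it could take changes the observed behaviour.
def behaviour(data: bytes) -> int:
    """Hypothetical stand-in for coverage feedback: only the magic
    header and the byte after it influence this toy parser."""
    if not data.startswith(b"HDR"):
        return 0
    return 1 + (data[3] if len(data) > 3 else 0) % 4

def uninteresting_bytes(seed: bytes) -> list:
    """Indices where none of the 256 possible values change behaviour."""
    base = behaviour(seed)
    boring = []
    for i in range(len(seed)):
        changes = any(
            behaviour(seed[:i] + bytes([v]) + seed[i + 1:]) != base
            for v in range(256)
        )
        if not changes:
            boring.append(i)
    return boring

seed = b"HDR\x02payload-padding"
boring = uninteresting_bytes(seed)
trimmed = bytes(b for i, b in enumerate(seed) if i not in boring)
print(len(seed), len(trimmed))  # the padding bytes are removed
```

In this toy case every byte after the header and length field is dropped, so subsequent mutations are spent only on bytes that can actually steer the parser, which is the speedup the abstract describes.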
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Phenotype Database and Data Mining at RGD
Mary Shimoyama
Rat Genome Database, Human and Molecular Genetics Center, Medical College of Wisconsin, Milwaukee, Wisconsin
ABSTRACT: There has been an emergence of multiple large scale phenotype projects in the rat model organism community, as well as renewed interest in the ongoing phenotype data generated by thousands of researchers using hundreds of rat strains. In order to provide ready access to such data, the Rat Genome Database (RGD), http://rgd.mcw.edu, is developing a Rat Phenome Database, http://rgd.mcw.edu/phenotypes . To integrate data from multiple datasets and published literature, RGD is developing a Rat Strain Taxonomy (1) and three ontologies to standardize the major aspects of phenotype records: the Clinical Measurement Ontology (2), the Measurement Method Ontology (3), and the Experimental Conditions Ontology (4).

In a pilot study, phenotype data from Physgen (http://pga.mcw.edu), from the National BioResource Project for the Rat in Japan (http://www.anim.med.kyoto-u.ac.jp/nbr/), and from published literature were integrated.

A data mining tool was developed to leverage the utility of the ontologies (5). Users may begin their search with any of the four components of the phenotype data. In the example, the user chooses a Clinical Measurement (6), followed by a filter for Measurement Method (7) and Experimental Conditions (8), and finally limits the search by Strain (9). The tool keeps track of the search parameters and provides a running count of records retrieved. Only filter choices applicable to the set parameters, and for which there are data, are presented to the user. Results can be downloaded in both summary and expanded formats (10).

Development of the ontologies for use with rat phenotypes continues, and expansions and modifications are being applied for use with the human clinical data from the Family Blood Pressure Project in conjunction with Washington University School of Medicine.

[Figure: Standardizing Phenotype Data with Ontologies. Diagram of the major components of phenotype data records (study, experiment, researcher, strain, sample, clinical measurement, measurement method, experimental conditions), each standardized with an ontology or vocabulary: Strain Taxonomy, Clinical Measurement Ontology, Measurement Method Ontology, and Experimental Condition Ontology.]

[Figure: Data Mining Using Ontologies. Screenshots of the tool: users may start from any of the four phenotype components (5); each ontology is presented with the number of associated records (6); further filters are chosen (7, 8) and the search is limited by strain (9); the tool keeps track of the query parameters and the number of records returned, presenting only applicable choices for which there are data; results may be viewed and downloaded in summary or expanded formats (10).]
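The faceted-filtering behaviour the poster describes (running record counts and offering only filter values that still match data) can be sketched as follows; the records and field names are illustrative stand-ins, not RGD's actual schema or data.

```python
# Sketch of ontology-style faceted filtering: each added filter narrows
# the record set, a running count is reported, and only values with
# matching data are offered as the next filter choices.
records = [
    {"measurement": "blood pressure", "method": "tail cuff",
     "condition": "high salt diet", "strain": "SS"},
    {"measurement": "blood pressure", "method": "telemetry",
     "condition": "high salt diet", "strain": "SS"},
    {"measurement": "heart weight", "method": "necropsy",
     "condition": "control diet", "strain": "BN"},
]

def apply_filter(recs, field, value):
    """Narrow the record set and report the running count."""
    hits = [r for r in recs if r[field] == value]
    print(f"{field}={value!r}: {len(hits)} records")
    return hits

def applicable_choices(recs, field):
    """Offer only the values for which matching data still exist."""
    return sorted({r[field] for r in recs})

step1 = apply_filter(records, "measurement", "blood pressure")
print(applicable_choices(step1, "method"))  # only methods with data remain
step2 = apply_filter(step1, "method", "telemetry")
```

Because each choice list is computed from the already-filtered records, a user can never select a combination that returns zero results, which is the usability property the poster highlights.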
RGD is funded by grant HL64541 from the National Heart, Lung, and Blood Institute on behalf of the NIH.