This document discusses various topics in bioinformatics and Biopython:
1. It introduces GitHub as a code hosting site and shows how to access a private GitHub repository.
2. It covers various Python control structures (if/else, while, for) and data structures (lists, dictionaries).
3. It provides examples of using Biopython to work with biological sequences, including translating DNA to protein, finding complements, and working with different genetic codes.
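The translation step described above can be sketched without Biopython itself. The following is a minimal pure-Python illustration of what a `Seq.translate()`-style call does; the codon table is a small hand-written subset of the standard genetic code, included only for demonstration.

```python
# Minimal sketch of DNA -> protein translation, mimicking what a
# Biopython-style translate() does under the standard genetic code.
# CODON_TABLE is a hand-written subset, not the full table.
CODON_TABLE = {
    "ATG": "M", "TTT": "F", "TTC": "F", "GGA": "G", "GGC": "G",
    "AAA": "K", "GAT": "D", "TGA": "*", "TAA": "*", "TAG": "*",
}

def translate(dna):
    """Translate a DNA string codon by codon; '*' marks a stop codon."""
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        codon = dna[i:i + 3]
        aa = CODON_TABLE.get(codon, "X")  # 'X' for codons not in our subset
        if aa == "*":
            break  # stop translating at a stop codon
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTAAATGA"))  # ATG TTT AAA TGA -> "MFK"
```

In real Biopython code the full codon tables (including alternative genetic codes) are provided by the library, so none of this bookkeeping is needed.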
This document provides an overview of the PEAR DB abstraction layer. It allows for portable database programming in PHP by providing a common API that works across different database backends like MySQL, PostgreSQL, Oracle, etc. It handles tasks like prepared statements, transactions, error handling, and outputting query results in a standardized way. PEAR DB aims to simplify database programming and make applications less dependent on the underlying database system.
This document provides an overview of using Python for bioinformatics. It discusses why Python is useful for bioinformatics due to its built-in libraries and wide scientific use. It also covers Python basics like strings, regular expressions, control structures, lists, dictionaries, reading/writing files, and using GitHub for code sharing. Examples are given for many of these topics. Finally, it poses questions about analyzing sequence data and a protein database using Python.
This document provides an introduction to relational database management systems (RDBMS) through a series of slides. It covers topics such as installing MySQL, connecting to databases, using SQL commands to retrieve and manipulate data, and designing databases. The slides introduce fundamental RDBMS concepts like tables, rows, columns, keys, and relationships. It also demonstrates how to use the MySQL command line interface to issue queries and explore database structure. Examples are provided for common SQL statements like SELECT, CREATE, INSERT and more.
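The SQL statements listed above (CREATE, INSERT, SELECT) can be demonstrated in a self-contained way. This sketch uses Python's built-in sqlite3 module in place of MySQL so it runs anywhere; the table and column names are invented for illustration.

```python
import sqlite3

# Demonstrates the basic SQL statements the slides cover (CREATE,
# INSERT, SELECT) against an in-memory SQLite database. Table and
# column names are made up for this example.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE genes (id INTEGER PRIMARY KEY, name TEXT, length INTEGER)")
cur.executemany("INSERT INTO genes (name, length) VALUES (?, ?)",
                [("BRCA1", 81189), ("TP53", 19149)])
conn.commit()

cur.execute("SELECT name, length FROM genes WHERE length > ? ORDER BY name", (20000,))
rows = cur.fetchall()
print(rows)  # [('BRCA1', 81189)]
conn.close()
```

The same statements work at the MySQL command line described in the slides, with only the connection step differing.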
This document provides information on using Perl to interact with and manipulate databases. It discusses:
- Using the DBI module to connect to databases in a vendor-independent way
- Installing Perl modules like DBI and DBD drivers to connect to specific databases like Postgres
- Preparing the Postgres database environment, including initializing and starting the database
- Using the DBI handler and statements to connect to and execute queries on the database
- Retrieving and manipulating database records through functions like SELECT, adding new records, etc.
The document provides code examples for connecting to Postgres with Perl, executing queries to retrieve data, and manipulating the database through operations like inserting new records.
This document discusses Biopython, a Python package for biological data analysis. It provides concise summaries of key Biopython concepts:
1) Biopython is an object-oriented Python package that consists of modules for common biological data operations like working with sequences.
2) Key Biopython classes include Alphabet for sequence alphabets, Seq for representing sequences, SeqRecord for sequences with metadata, and SeqIO for reading/writing sequences to files.
3) Classes specify attributes (data) and methods (functions) that objects can have. For example, Seq objects have attributes like sequence and alphabet, and methods like translate() and complement().
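The `complement()` behavior mentioned above can be sketched with plain strings rather than Biopython's Seq class; this is only an illustration of what the method computes, not the library's implementation.

```python
# Sketch of what Seq-style complement() / reverse complement compute,
# using plain strings instead of Biopython's Seq objects.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def complement(dna):
    """Replace each base with its Watson-Crick complement."""
    return dna.translate(COMPLEMENT)

def reverse_complement(dna):
    """Complement, then reverse, as on the opposite strand."""
    return complement(dna)[::-1]

print(complement("ATGC"))          # TACG
print(reverse_complement("ATGC"))  # GCAT
```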
The document provides instructions for running Hadoop in standalone, pseudo-distributed, and fully distributed modes. It discusses downloading and installing Hadoop, configuring environment variables and files for pseudo-distributed mode, starting Hadoop daemons, and running a sample word count MapReduce job locally to test the installation.
Hadoop is an open-source software framework for distributed storage and processing of large datasets across clusters of computers. The core of Hadoop includes HDFS for distributed storage, and MapReduce for distributed processing. Other Hadoop projects include Pig for data flows, ZooKeeper for coordination, and YARN for job scheduling. Key Hadoop daemons include the NameNode, Secondary NameNode, DataNodes, JobTracker and TaskTrackers.
The document describes the steps to set up a Hadoop cluster with one master node and three slave nodes. It includes installing Java and Hadoop, configuring environment variables and Hadoop files, generating SSH keys, formatting the namenode, starting services, and running a sample word count job. Additional sections cover adding and removing nodes and performing health checks on the cluster.
Vmlinux: anatomy of bzImage and how the x86_64 processor is booted - Adrian Huang
This slide deck describes the Linux booting flow for x86_64 processors.
Note: When you view the slide deck via web browser, the screenshots may be blurred. You can download and view them offline (screenshots are clear).
The document provides descriptions of various components in Hadoop including Hadoop Core, Pig, ZooKeeper, JobTracker, TaskTracker, NameNode, Secondary NameNode, and the design of HDFS. It also discusses how to deploy Hadoop in a distributed environment and configure core-site.xml, hdfs-site.xml, and mapred-site.xml.
MySQL Slow Query Log Monitoring using Beats & ELK - I Goo Lee
This document provides instructions for using Filebeat, Logstash, Elasticsearch, and Kibana to monitor MySQL slow query logs. It describes installing and configuring each component, with Filebeat installed on database servers to collect slow query logs, Logstash to parse and index the logs, Elasticsearch for storage, and Kibana for visualization and dashboards. Key steps include configuring Filebeat to ship logs to Logstash, using grok filters in Logstash to parse the log fields, outputting to Elasticsearch, and visualizing slow queries and creating sample dashboards in Kibana.
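The grok parsing step described above amounts to pattern matching on the slow-log statistics lines. Here is a plain-regex sketch of the same field extraction; the sample line follows the standard MySQL slow-log format, and the field names mirror the log's own labels.

```python
import re

# Sketch of the field extraction a Logstash grok filter performs on
# a MySQL slow-log statistics line, using a plain Python regex.
line = "# Query_time: 2.000000  Lock_time: 0.000050 Rows_sent: 1  Rows_examined: 4096"

pattern = re.compile(
    r"# Query_time: (?P<query_time>[\d.]+)\s+"
    r"Lock_time: (?P<lock_time>[\d.]+)\s+"
    r"Rows_sent: (?P<rows_sent>\d+)\s+"
    r"Rows_examined: (?P<rows_examined>\d+)"
)

m = pattern.match(line)
# Convert the captured strings to numbers for indexing into Elasticsearch.
fields = {k: float(v) if "." in v else int(v) for k, v in m.groupdict().items()}
print(fields)
```

In the actual pipeline, Logstash ships with ready-made grok patterns, so the regex above is only a model of what that filter does internally.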
This document provides an overview and introduction to using the command line interface and submitting jobs to the NIAID High Performance Computing (HPC) Cluster. The objectives are to learn basic Unix commands, practice file manipulation from the command line, and submit a job to the HPC cluster. The document covers topics such as the anatomy of the terminal, navigating directories, common commands, tips for using the command line more efficiently, accessing and mounting drives on the HPC cluster, and an overview of the cluster queue system.
This presentation covers all aspects of PostgreSQL administration, including installation, security, file structure, configuration, reporting, backup, daily maintenance, monitoring activity, disk space computations, and disaster recovery. It shows how to control host connectivity, configure the server, find the query being run by each session, and find the disk space used by each database.
eZ Cluster allows running an eZ Publish installation on multiple servers for improved performance, redundancy, and scalability. It matches the database storage for metadata with either database or network file system storage for content files. The cluster handlers store metadata in the database and files either in the database or on an NFS server. Configuration involves setting the cluster handler, storing files on the database or NFS, moving existing files to the cluster, rewriting URLs, and indexing binary files. The cluster API provides methods for reading, writing, and caching files while handling concurrency and stale caching.
Virtual File System in Linux Kernel
Note: When you view the slide deck via web browser, the screenshots may be blurred. You can download and view them offline (screenshots are clear).
The document provides an overview of Hydra, an open source distributed data processing system. It discusses Hydra's goals of supporting streaming and batch processing at massive scale with fault tolerance. It also covers key Hydra concepts like jobs, tasks, and nodes. The document then demonstrates setting up a local Hydra development environment and creating a sample job to analyze log data and find top search terms.
In this session we will cover wide area replica sets and using tags for backup. Attendees should be well versed in basic replication and familiar with concepts in the morning's basic replication talk. No beginner topics will be covered in this session
- Replica sets in MongoDB allow for replication across multiple servers, with one server acting as the primary and able to accept writes, and other secondary servers replicating the primary.
- If the primary fails, the replica set will automatically elect a new primary from the secondary servers and continue operating without interruption.
- The replica set configuration specifies the members, their roles, and settings like heartbeat frequency to monitor member health and elect a primary if needed.
Database replication involves keeping identical copies of data on different servers to provide redundancy and minimize downtime. Replication is recommended for databases in production from the start. A MongoDB replica set consists of a primary server that handles client requests and secondary servers that copy the primary's data. Replica sets can include up to 50 members with 7 voting members and use an oplog to replicate operations from the primary to secondaries. For elections and writes to succeed, a majority of voting members must be reachable.
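The "majority of voting members" rule above is a simple calculation, sketched here for the common odd-sized replica sets:

```python
# With v voting members, a majority is v // 2 + 1: the smallest
# number of members that is strictly more than half.
def majority(voting_members):
    return voting_members // 2 + 1

for v in (3, 5, 7):
    print(v, "voters -> majority of", majority(v))
```

This is why odd member counts are preferred: a 4-member set still needs 3 votes, so the fourth member adds no extra failure tolerance for elections.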
Process Address Space: The way to create virtual address (page table) of userspace application - Adrian Huang
Process Address Space: The way to create virtual address (page table) of userspace application.
Note: When you view the slide deck via web browser, the screenshots may be blurred. You can download and view them offline (screenshots are clear).
SCALE 15x Minimizing PostgreSQL Major Version Upgrade DowntimeJeff Frost
This document provides instructions for minimizing downtime when performing a major version upgrade of PostgreSQL using logical replication with Slony. It discusses various methods for performing the upgrade, including dump/restore, pg_upgrade, and logical replication with Slony. It then provides a step-by-step guide to setting up logical replication between two PostgreSQL nodes using Slony, including initializing the cluster and nodes, creating replication sets, subscribing nodes, and monitoring the initial synchronization process. The document demonstrates how Slony allows performing a graceful switchover and switchback between nodes when upgrading PostgreSQL versions.
Page cache mechanism in Linux kernel.
Note: When you view the slide deck via web browser, the screenshots may be blurred. You can download and view them offline (screenshots are clear).
The document discusses the glance-replicator tool in OpenStack. Glance-replicator allows replication of images between two glance servers. It can replicate images and also import and export images. The document provides examples of using glance-replicator commands like compare, livecopy to replicate images between two devstack all-in-one OpenStack environments. It demonstrates the initial state with only one environment having images and after replication both environments having the same set of images.
The document discusses various topics in bioinformatics including:
1) Control structures, lists, dictionaries, and regular expressions in Python.
2) Parsing Swiss-Prot files and extracting amino acid frequencies using Biopython.
3) Functions for working with biological sequences like transcription, translation, and translating between different genetic codes using the Biopython module.
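The amino-acid frequency step mentioned above can be sketched on a plain protein string rather than a parsed Swiss-Prot record; the sequence here is an arbitrary example, not real data.

```python
from collections import Counter

# Count amino-acid frequencies in a protein sequence string.
# The sequence below is an arbitrary example for illustration.
protein = "MKVLAAGLLALAACS"

counts = Counter(protein)
total = len(protein)
freqs = {aa: n / total for aa, n in sorted(counts.items())}
print(freqs["A"])  # fraction of alanine residues
```

With Biopython, the same `Counter` call applies to the `.seq` attribute of each record returned by `SeqIO.parse` over a Swiss-Prot file.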
This presentation shows a set of Python libraries and frameworks for performing both functional and non-functional testing at different levels (unit, acceptance, and e2e).
This tutorial is intended for verification engineers that must validate algorithmic designs. It presents the detailed steps for implementing a SystemVerilog verification environment that interfaces with a GNU Octave mathematical model. It describes the SystemVerilog – C++ communication layer with its challenges, like proper creation and activation or piped algorithm synchronization handling. The implementation is illustrated for Ncsim, VCS and Questa.
Docker Logging and analysing with Elastic Stack - Jakub Hajek
Collecting logs from an entirely stateless environment is one of the challenging parts of the application lifecycle. Correlating business logs with operating system metrics to provide insights is crucial for the entire organization. What aspects should be considered while you design your logging solution?
Docker Logging and analysing with Elastic Stack - Jakub Hajek (PROIDEA)
Collecting logs from an entirely stateless environment is one of the challenging parts of the application lifecycle. Correlating business logs with operating system metrics to provide insights is crucial for the entire organization. We will see a technical presentation on how to manage a large amount of data in a typical environment with microservices.
This document provides an overview of OpenStack APIs and the WSGI (Web Server Gateway Interface) that powers them. It begins with an introduction to WSGI and how OpenStack services are implemented as WSGI applications. It then demonstrates how the OpenStack APIs can be accessed via libraries like novaclient or directly with HTTP requests. Code examples are provided showing how to authenticate against Keystone and retrieve images using urllib2. The document concludes with explanations of how WSGI, WebOb, and Paste are used to implement the OpenStack "web stack".
The document discusses porting Python 2 code to Python 3. It recommends taking a gradual approach, using compatibility shims like Six and Future to support both Python 2 and 3 in the same codebase. It provides tips on changing string handling, migrating approaches like map and filter, and using tools like Futurize, Modernize and Pylint to automate the porting process. Common issues discussed include bytes vs text, iterator consumption, and dictionary changes during iteration. The document emphasizes adopting better string hygiene practices and treating bytes and text separately.
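The "bytes vs text" hygiene the porting guide stresses can be shown in a few lines: decode at the boundary, work with `str` internally, and encode on the way out.

```python
# Decode once at the input boundary, keep all internal work on str,
# and encode once at the output boundary.
raw = b"s\xc3\xa9quence"         # UTF-8 bytes, e.g. read from a file or socket
text = raw.decode("utf-8")       # bytes -> str at the boundary
assert isinstance(text, str)

processed = text.upper()         # internal processing on str only
out = processed.encode("utf-8")  # str -> bytes on the way out
print(processed)  # SÉQUENCE
```

Under Python 2 the implicit mixing of the two types often went unnoticed; under Python 3 the explicit decode/encode pair makes the boundary visible and portable.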
PyCon AU 2012 - Debugging Live Python Web Applications - Graham Dumpleton
Monitoring tools record the result of what happened to your web application when a problem arises, but for some classes of problems, monitoring systems are only a starting point. Sometimes it is necessary to take more intrusive steps to plan for the unexpected by embedding mechanisms that will allow you to interact with a live deployed web application and extract even more detailed information.
DISQUS is a comment system that handles high volumes of traffic, with up to 17,000 requests per second and 250 million monthly visitors. They face challenges in unpredictable spikes in traffic and ensuring high availability. Their architecture includes over 100 servers split between web servers, databases, caching, and load balancing. They employ techniques like vertical and horizontal data partitioning, atomic updates, delayed signals, consistent caching, and feature flags to scale their large Django application.
This document discusses Go web development using the Gin web framework. It provides an overview of Gin's features and file structure conventions. It also describes using Orator ORM for database migrations in Go applications. Benchmark results show the json-iterator library provides better JSON performance than the standard encoding/json package in Go. The document concludes with recommendations for Nginx SSL and security header parameters.
This document discusses porting a legacy Python application called Eddie-tool to work with both Python 2 and 3. The application is over 10K lines of code for system monitoring and was last updated in 2009. The author explains why they want to support both Python versions to accommodate enterprise clients still using Python 2. They outline their porting process which included using tools like 2to3 and python-modernize, writing unit tests, and creating a compatibility module. The outcome was a new agent called Boris that was ported in 22 hours and works with Python 2.7 and 3.3+ while addressing issues like bytes, longs, exceptions, and other changes between the Python versions.
The document describes a workshop on Xilinx Vivado High Level Synthesis (HLS) tools held at NECST. The agenda includes an introduction to hardware design flow, the Vivado HLS design flow, kernel creation and optimization, and a hands-on example of implementing a vector addition using Vivado HLS. The example takes the participants through various implementation versions to optimize the kernel by applying directives for loop pipelining, array partitioning, and memory optimizations.
This document provides an overview of Python fundamentals including installing Python, hidden documentation tools, data types, strings, lists, tuples, dictionaries, control flow statements, functions, classes, the datetime library, importing modules, and web2py fundamentals such as the request and response objects, templates, controllers, models, and more. Key concepts covered include Python types like strings, lists, tuples and dictionaries, control structures like if/else and for loops, functions, classes and objects, and the basics of using the web2py framework to build web applications.
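A compact tour of the basics listed above (strings, lists, tuples, dictionaries, control flow, and functions) can fit in a few lines; all the names here are invented for illustration.

```python
# Strings, lists, tuples, dictionaries, if/else, a for loop,
# and a small function, all in one sketch.
def describe(items):
    """Return a dict mapping each string item to its length."""
    result = {}
    for item in items:             # for loop over a list
        if isinstance(item, str):  # if control flow
            result[item] = len(item)
    return result

point = (3, 4)                     # tuple
names = ["alpha", "beta"]          # list
print(describe(names))             # {'alpha': 5, 'beta': 4}
print(point[0] + point[1])         # 7
```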
A database firewall is a useful tool that monitors databases to identify and protect against database-specific attacks, which mostly seek to access sensitive information stored in the databases. However, commercial database firewalls are expensive and require specific product knowledge, while the open-source database firewalls are designed for specific open-source database servers.
To fulfill the need for an inexpensive database firewall, Snort - an open-source IDS/IPS - can achieve the goal in some scenarios with familiar rule writing. The paper explains the limitations of Snort as a database firewall, constraints around commercial database statements, and some example implementations.
The document discusses reading and writing files in Python. It provides examples of opening files for reading, writing, and appending. It demonstrates how to read an entire file, individual lines, and loop through lines. It also shows how to write strings to files and close files once writing is complete. Additional topics covered include a template for reading files line by line and examples of counting lines, words, and characters in a file.
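The line-by-line read loop and the line/word/character counts described above can be combined into one template; the temporary file is created here only so the sketch is self-contained.

```python
import os
import tempfile

# Write a small sample file, then count its lines, words, and
# characters by reading it back line by line.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    f.write("hello world\nsecond line\n")
    path = f.name

lines = words = chars = 0
with open(path) as fh:
    for line in fh:                # read the file one line at a time
        lines += 1
        words += len(line.split())
        chars += len(line)         # includes the trailing newline

os.remove(path)
print(lines, words, chars)  # 2 4 24
```

The `with` blocks close both files automatically, which is the idiomatic replacement for explicit `close()` calls.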
The document provides instructions for setting up a TI-RTOS project for the CC1352R wireless microcontroller. It describes creating a CCS project targeting the CC1352R, configuring compiler and linker settings, generating a system configuration file, and adding TI-RTOS and driver library files. The goal is to build a basic "hello world" project to demonstrate real-time operating system functionality on the CC1352R wireless microcontroller.
Down the rabbit hole, profiling in Django - Remco Wendt
The document discusses various tools for profiling Python code such as cProfile, profile, hotshot, line profiler, and trace to identify inefficient code and bottlenecks. It covers using these tools to profile CPU and I/O bound problems as well as memory profiling issues. The document also demonstrates how to optimize code through caching, removing unnecessary function calls, and memoization.
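Two of the techniques named above, memoization and cProfile, can be combined in a short sketch: `functools.lru_cache` memoizes the recursive calls, and the profiler records where time went.

```python
import cProfile
import io
import pstats
from functools import lru_cache

# Memoization via lru_cache turns the exponential naive Fibonacci
# recursion into a linear one.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Profile the call with cProfile, collecting stats for inspection.
profiler = cProfile.Profile()
profiler.enable()
result = fib(60)
profiler.disable()

print(result)  # 1548008755920
stats = pstats.Stats(profiler, stream=io.StringIO())
stats.sort_stats("cumulative")  # inspect with stats.print_stats()
```

Without the cache decorator, `fib(60)` would make on the order of 10^12 calls; with it, each value is computed once.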
The PyConTW (http://tw.pycon.org) organizers wish to improve the quality and quantity of the programming communities in Taiwan. Though Python is their core tool and methodology, they know it is worth learning from and communicating with wide-ranging communities. Understanding the culture and ecosystem of a language took me about three to six months. This six-hour course wraps up what I, an experienced Java developer, have learned from the Python ecosystem and the agendas of past PyConTW events.
You can find the Chinese version at the following link:
http://www.codedata.com.tw/python/python-tutorial-the-1st-class-1-preface
( ** Python Certification Training: https://www.edureka.co/python ** )
This Edureka PPT on Advanced Python tutorial covers all the important aspects of using Python for advanced use-cases and purposes. It establishes all of the concepts like system programming , shell programming, pipes and forking to show how wide of a spectrum Python offers to the developers.
Python Tutorial Playlist: https://goo.gl/WsBpKe
Blog Series: http://bit.ly/2sqmP4s
This document provides an overview of essential Linux commands and utilities for SQL Server DBAs. It covers topics such as Linux history, users and permissions, file editing and navigation commands like vi, process monitoring with ps and top, and system diagnostic utilities like sar, vmstat, and mpstat. The document aims to teach SQL Server DBAs basic Linux skills to manage their environment and troubleshoot issues.
This document provides an overview of bioinformatics and biological databases. It discusses how bioinformatics draws from fields like biology, computer science, statistics, and machine learning. Biological databases are important resources for bioinformatics that can be searched and analyzed to answer questions, find similar sequences, locate patterns, and make predictions. The document also outlines common uses of biological databases, such as annotation searches, homology searches, pattern searches, and predictive analyses.
The document discusses the Rh blood group system and its clinical significance. It describes the key observations in 1939 that linked adverse reactions in mothers to stillborn fetuses and blood transfusions from fathers, indicating a relationship. This syndrome is now called hemolytic disease of the fetus and newborn. The Rh system was identified in 1940 through experiments immunizing animals with Rhesus macaque monkey red blood cells. The D antigen is the most important RBC antigen in transfusion practice, as those lacking it do not produce anti-D antibody unless exposed to D antigen through transfusion or pregnancy. Testing for D is routinely performed to ensure D-negative patients receive D-negative blood.
The document discusses views and materialized views in data warehousing and decision support systems. It covers three main points:
1) OLAP queries typically involve aggregate queries, so precomputation is essential for fast response times. Materialized views allow precomputing aggregates across multiple dimensions.
2) Warehouses can be thought of as collections of asynchronously replicated tables and periodically maintained views, renewing interest in efficient view maintenance.
3) Materialized views store the results of views in the database for fast access like a cache, but they require maintenance as underlying tables change. Incremental maintenance algorithms are ideal to efficiently update materialized views.
The document discusses various database concepts including normalization, which is used to design optimal relation schemas by removing redundant data. It also covers transaction processing, which involves executing logical database operations as transactions to maintain data integrity. Database systems use techniques like logging and concurrency control to prevent transaction anomalies and ensure failures can be recovered from.
This document contains a list of names, emails, and study programs of students. It includes their official student code, last name, first name, email, and educational program. There are 20 students listed with their details.
This document discusses the Biological Databases project being conducted by a group of students. The project involves using the video game Minecraft to visualize protein structures retrieved from the Protein Data Bank (PDB). Python scripts are used to import PDB data files and place blocks in Minecraft to represent atoms, with different block colors used to distinguish atom types. SPARQL queries are also employed to search the RDF version of the PDB for protein entries. The goal is to build 3D protein models inside Minecraft for educational and visualization purposes.
The document discusses various bioinformatics tools and algorithms for analyzing protein sequences, including Biopython for working with biological sequence data, the Kyte-Doolittle algorithm for predicting transmembrane regions, and the Chou-Fasman algorithm for predicting secondary structure from amino acid preferences for alpha helices, beta sheets, and random coils. It also provides examples of analyzing Swiss-Prot data to find properties of human proteins and applying these tools and libraries to extract insights from protein sequences.
The document discusses various topics related to analyzing protein sequences using Python and Biopython. It provides examples of using Biopython to parse sequence data from UniProt, calculate lengths and translations of sequences. It also discusses analyzing properties of sequences like molecular weight, isoelectric point, transmembrane regions, and comparing sequences to find conserved motifs. Finally, it introduces hydropathy indices and tools for predicting properties like transmembrane helices from primary sequences.
This document discusses Python functions. It explains that there are built-in functions provided as part of Python and user-defined functions. User-defined functions are created using the def keyword and can take parameters and return values. The body of a function is indented and runs when the function is called. Functions allow code to be reused and organized in a modular way. Examples are provided to demonstrate defining and calling functions with different parameters and return values.
The document provides a recap of Python programming concepts like conditions and statements, while loops, for loops, break and continue statements, and working with strings. It also introduces regular expressions as a way to match patterns in strings using a formal language that can be interpreted by a regular expression processor.
This document discusses next generation DNA sequencing technologies. It begins by describing some of the limitations of traditional Sanger sequencing, such as read lengths of 500-1000 bases and throughput of 57,000 bases per run. It then introduces key next generation sequencing technologies, such as 454 sequencing, which uses emulsion PCR and pyrosequencing to achieve read lengths of 20-100 bases but higher throughput of 20-100 Mb per run. Illumina/Solexa sequencing is also discussed, which uses sequencing by synthesis with reversible terminators and laser-based detection. Finally, third generation sequencing technologies are mentioned, such as Pacific Biosciences' single molecule real time sequencing and nanopore sequencing. In summary, the document provides a high-level overview of these technologies.
The document provides an overview of the history and evolution of various programming languages. It discusses early languages like FORTRAN, LISP, PASCAL, C, and Java. It also covers scripting languages and their uses. The document explains what Python is as a programming language - that it is interpreted, object-oriented, and high-level. It was named after Monty Python and was created by Guido van Rossum. The document then gives examples of using Python to program Minecraft by importing protein data from PDB files and using coordinates to place blocks to visualize proteins in the game.
This document provides an introduction to bio-ontologies and the semantic web. It discusses what ontologies are and how they are used in the bio domain through initiatives like the OBO Foundry. It introduces key semantic web technologies like RDF, URIs, Turtle syntax, and SPARQL query language. It provides examples of ontologies like the Gene Ontology and how ontologies can be represented and queried using these semantic web standards.
This document provides an overview of NoSQL databases, including:
- Key-value stores store data as maps or hashmaps and are efficient for data access but limited in query capabilities.
- Column-oriented stores group attributes into column families and store data efficiently but are operationally challenging.
- Document databases store loosely structured data like JSON and allow retrieving documents by keys or contents.
- Graph databases are suited for interaction networks and path finding but are less suited for tabular data.
The document discusses creating a multicore database project. It recommends taking the following steps:
1. Define what the project is about, what it aims to achieve, and who it is for.
2. Identify information resources and develop a basic data model.
3. Design a user interface mockup without technical constraints, thinking creatively.
This document discusses biological databases and PHP. It begins with an overview of biological databases and examples using BIOSQL to load genetic data from GenBank into a MySQL database. It then provides examples of building a basic 3-tier model with Apache, PHP, and a MySQL backend database. The document also includes a brief introduction to PHP, covering its history, why it is commonly used, and basic syntax like conditional statements.
This document discusses biological databases and SQL. It provides an overview of primary and derived data in biological research, as well as different data levels. It then discusses direct querying of selected bioinformatics databases using SQL and provides examples of 3-tier database models. The document proceeds to discuss rationale for learning SQL to query biological databases and provides definitions and explanations of key SQL concepts like tables, records, queries, data types, keys, integrity rules and constraints.
This document discusses biological databases and bioinformatics. It begins with an overview of bioinformatics as an interdisciplinary field combining biology, computer science, and information technology. It then discusses different types of biological databases, including those focused on sequences, pathways, protein structures, and gene expression. The document outlines some common uses of biological databases, including searching for annotations, identifying similar sequences through homology, searching for patterns, and making predictions. It also briefly discusses comparing data across databases. The summary provides a high-level overview of the key topics and uses of biological databases covered in the document.
The document discusses several topics related to protein structure prediction using Python:
1. It introduces the Chou-Fasman algorithm for predicting protein secondary structure from amino acid sequence. The algorithm calculates preference parameters for each amino acid to be in alpha helices, beta sheets, or other structures.
2. It provides an example of calculating helical propensity.
3. It lists the preference parameters output by the Chou-Fasman algorithm for each amino acid.
4. It outlines the steps of applying the Chou-Fasman algorithm to predict secondary structure elements in a protein sequence.
The document provides information on various Python programming concepts including control structures, lists, dictionaries, regular expressions, exceptions, and biological applications using Biopython. It discusses if/else statements, while and for loops, list operations, dictionary usage, regex patterns, exception handling roles, and gives examples analyzing protein sequences and structures using Biopython.
4. GitHub: Hosted GIT
• Largest open source git hosting site
• Public and private options
• User-centric rather than project-centric
• http://github.ugent.be (use your UGent login and password)
– Accept invitation from Bioinformatics-I-2015
URI:
– https://github.ugent.be/Bioinformatics-I-2015/Python.git
8. Regex.py
import re

text = 'abbaaabbbbaaaaa'
pattern = 'ab'
for match in re.finditer(pattern, text):
    s = match.start()
    e = match.end()
    print('Found "%s" at %d:%d' % (text[s:e], s, e))

# Pulling out a capture group (assuming each input line is in `line`):
m = re.search("^([A-Z]) ", line)
if m:
    from_letter = m.groups()[0]
9. Question 3: Swiss-Knife.py
• Using a database as input: parse the entire Swiss-Prot collection
– How many entries are there?
– Average protein length (in aa and MW)?
– Relative frequency of amino acids
• Compare to the frequencies used to construct the PAM scoring matrices from 1978 and 1991
10. Question 3: Getting the database
Uniprot_sprot.dat.gz – 528 Mb (save on your network drive H:)
Unzipped: 2.92 Gb!
http://www.ebi.ac.uk/uniprot/download-center
11. Amino acid frequencies
1978 1991
L 0.085 0.091
A 0.087 0.077
G 0.089 0.074
S 0.070 0.069
V 0.065 0.066
E 0.050 0.062
T 0.058 0.059
K 0.081 0.059
I 0.037 0.053
D 0.047 0.052
R 0.041 0.051
P 0.051 0.051
N 0.040 0.043
Q 0.038 0.041
F 0.040 0.040
Y 0.030 0.032
M 0.015 0.024
H 0.034 0.023
C 0.033 0.020
W 0.010 0.014
Second step: Frequencies of Occurrence
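The frequency table above can be reproduced for any set of sequences with a few lines of plain Python. This is a minimal sketch: it uses the toxic-membrane-protein sequence that appears later in these slides as a stand-in for the full Swiss-Prot database, and no Biopython is required.

```python
# Sketch: relative amino-acid frequencies, as in the 1978/1991 columns above.
from collections import Counter

def aa_frequencies(sequences):
    """Return {residue: relative frequency} over all sequences."""
    counts = Counter()
    for seq in sequences:
        counts.update(seq)
    total = sum(counts.values())
    return {aa: n / total for aa, n in counts.items()}

# Stand-in for the parsed database: one sequence from a later slide.
freqs = aa_frequencies(["MKQHKAMIVALIVICITAVVAALVTRKDLCEVHIRTGQTEVAVF"])
print(sorted(freqs.items(), key=lambda kv: -kv[1])[:3])
```

On the real database you would pass every parsed sequence into `aa_frequencies` instead of a single example.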
12. Extra Questions
• How many records have a sequence of length 260?
• What are the first 20 residues of 143X_MAIZE?
• What is the identifier for the record with the shortest sequence? Is there more than one record with that length?
• What is the identifier for the record with the longest sequence? Is there more than one record with that length?
• How many contain the subsequence "ARRA"?
• How many contain the substring "KCIP-1" in the description?
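Questions of this kind reduce to `min`/`max` with a length key and substring tests once the records are parsed. A sketch under the assumption that parsing has already produced (identifier, sequence) pairs; the three records below are invented toy data, not real Swiss-Prot entries.

```python
# Toy stand-ins for parsed Swiss-Prot records: (identifier, sequence).
records = [
    ("143X_MAIZE", "MATPAAS" * 10),   # invented sequence
    ("TOY1_HUMAN", "MKARRAQW"),       # invented, contains "ARRA"
    ("TOY2_HUMAN", "MK"),             # invented, shortest
]

shortest = min(records, key=lambda r: len(r[1]))
longest = max(records, key=lambda r: len(r[1]))
with_arra = [ident for ident, seq in records if "ARRA" in seq]

print("shortest:", shortest[0], len(shortest[1]))
print("longest:", longest[0], len(longest[1]))
print("contain ARRA:", with_arra)
```

Checking whether more than one record shares the extreme length is one more comprehension over the same list.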
13. Perl / Python OO
• A class is a package
• An object is a reference to a data structure (usually a hash) in a class
• A method is a subroutine in the class
16. Biopython functionality and tools
• The ability to parse bioinformatics files into Python-utilizable data structures
• Supports the following formats:
– Blast output
– Clustalw
– FASTA
– PubMed and Medline
– ExPASy files
– SCOP
– SwissProt
– PDB
• Files in the supported formats can be iterated over record by record, or indexed and accessed via a dictionary interface
17. Biopython functionality and tools
• Code to deal with on-line bioinformatics destinations (NCBI, ExPASy)
• Interfaces to common bioinformatics programs (Blast, ClustalW)
• A sequence object dealing with sequences, sequence IDs, and sequence features
• Tools for operations on sequences
• Tools for dealing with alignments
• Tools to manage protein structures
• Tools to run applications
18. Install Biopython
The Biopython module name is Bio.
It must be downloaded and installed (http://biopython.org/wiki/Download).
You need to install numpy first.
>>> import Bio
19. Install Biopython
pip is the preferred installer program. Starting with Python 3.4, it is included by default with the Python binary installers.

pip3.5 install Biopython

# pip3.5 install yahoo_finance
from yahoo_finance import Share
yahoo = Share('AAPL')
print(yahoo.get_open())
20. Run Install.py (is Biopython installed?)
import pip
import sys
import platform
import webbrowser

# Note: pip.get_installed_distributions() only exists in old pip releases (< 10).
print("Python " + platform.python_version() + " installed packages:")
installed_packages = pip.get_installed_distributions()
installed_packages_list = sorted(["%s==%s" % (i.key, i.version)
                                  for i in installed_packages])
print(*installed_packages_list, sep="\n")
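Since `pip.get_installed_distributions()` was removed from later pip releases, the same listing can be produced with the standard library alone. A sketch using `importlib.metadata` (Python 3.8+):

```python
# Sketch: list installed packages without importing pip internals.
import platform
from importlib import metadata

print("Python %s installed packages:" % platform.python_version())
installed = sorted(
    "%s==%s" % (dist.metadata["Name"], dist.version)
    for dist in metadata.distributions()
)
print(*installed, sep="\n")
```

If `biopython==...` appears in the output, `import Bio` should succeed.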
21. BioPython
• Make a histogram of the MW (in kDa) of all proteins in Swiss-Prot
• Find the most basic and most acidic protein in Swiss-Prot
• Biological relevance of the results?

From AAIndex:
H ZIMJ680104
D Isoelectric point (Zimmerman et al., 1968)
R LIT:2004109b PMID:5700434
A Zimmerman, J.M., Eliezer, N. and Simha, R.
T The characterization of amino acid sequences in proteins by statistical methods
J J. Theor. Biol. 21, 170-201 (1968)
C KLEP840101 0.941 FAUJ880111 0.813 FINA910103 0.805
I    A/L     R/K     N/M     D/F     C/P     Q/S     E/T     G/W     H/Y     I/V
    6.00   10.76    5.41    2.77    5.05    5.65    3.22    5.97    7.59    6.02
    5.98    9.74    5.74    5.48    6.30    5.68    5.66    5.89    5.66    5.96
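The AAIndex record above can be loaded into a dict for per-residue lookups. A sketch, with one caveat labeled plainly: averaging these per-residue values is only a crude acidity/basicity indicator, not a real isoelectric-point calculation (a proper pI uses pKa values and charge balance, e.g. via Biopython's IsoelectricPoint utilities).

```python
# Zimmerman et al. (1968) isoelectric-point values from the AAIndex
# record ZIMJ680104 shown above.
ZIMJ680104 = {
    "A": 6.00, "R": 10.76, "N": 5.41, "D": 2.77, "C": 5.05,
    "Q": 5.65, "E": 3.22, "G": 5.97, "H": 7.59, "I": 6.02,
    "L": 5.98, "K": 9.74, "M": 5.74, "F": 5.48, "P": 6.30,
    "S": 5.68, "T": 5.66, "W": 5.89, "Y": 5.66, "V": 5.96,
}

def mean_iep(seq):
    """Crude per-residue average; a proxy only, not a true pI."""
    return sum(ZIMJ680104[aa] for aa in seq) / len(seq)

print(round(mean_iep("KRKR"), 2))  # lysine/arginine-rich: basic
print(round(mean_iep("DEDE"), 2))  # aspartate/glutamate-rich: acidic
```

Ranking all Swiss-Prot proteins by such a score gives a first cut at "most basic" and "most acidic" candidates.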
22. • Introduction to Biopython
– Sequence objects (I)
– Sequence Record objects (I)
– Protein structures (PDB module) (II)
• Working with DNA and protein sequences
– Transcription and Translation
• Extracting information from biological resources
– Parsing Swiss-Prot files (I)
– Parsing BLAST output (I)
– Accessing NCBI's Entrez databases (II)
– Parsing Medline records (II)
• Running external applications (e.g. BLAST) locally and from a script
– Running BLAST over the Internet
– Running BLAST locally
• Working with motifs
– Parsing PROSITE records
– Parsing PROSITE documentation records
24. Sequence Object
• Seq objects vs Python strings:
– They have different methods
– The Seq object has the attribute alphabet (the biological meaning of the Seq)

>>> import Bio
>>> from Bio.Seq import Seq
>>> my_seq = Seq("AGTACACTGGT")
>>> my_seq
Seq('AGTACACTGGT', Alphabet())
>>> print my_seq
AGTACACTGGT
>>> my_seq.alphabet
Alphabet()
>>>
25. The alphabet attribute
• Alphabets are defined in the Bio.Alphabet module
• We will use the IUPAC alphabets (http://www.chem.qmw.ac.uk/iupac)
• Bio.Alphabet.IUPAC provides definitions for DNA, RNA and proteins, plus extension and customization of the basic definitions:
– IUPACProtein (IUPAC standard AA)
– ExtendedIUPACProtein (+ selenocysteine, X, etc.)
– IUPACUnambiguousDNA (basic GATC letters)
– IUPACAmbiguousDNA (+ ambiguity letters)
– ExtendedIUPACDNA (+ modified bases)
– IUPACUnambiguousRNA
– IUPACAmbiguousRNA
27. >>> my_seq = Seq("AGTAACCCTTAGCACTGGT", IUPAC.unambiguous_dna)
>>> for index, letter in enumerate(my_seq):
... print index, letter
...
0 A
1 G
2 T
3 A
4 A
5 C
...etc
>>> print len(my_seq)
19
>>> print my_seq[0]
A
>>> my_seq[2:10]
Seq('TAACCCTT', IUPACUnambiguousDNA())
>>> my_seq.count('A')
5
>>> 100*float(my_seq.count('C')+my_seq.count('G'))/len(my_seq)
47.368421052631582
Sequences act like strings
28. >>> my_seq = Seq("AGTAACCCTTAGCACTGGT", IUPAC.unambiguous_dna)
>>> str(my_seq)
'AGTAACCCTTAGCACTGGT'
>>> print my_seq
AGTAACCCTTAGCACTGGT
>>> fasta_format_string = ">DNA_id\n%s\n" % my_seq
>>> print fasta_format_string
>DNA_id
AGTAACCCTTAGCACTGGT

# Biopython 1.44 or older
>>> my_seq.tostring()
'AGTAACCCTTAGCACTGGT'

Turn Seq objects into strings
You may need the plain sequence string (e.g. to write to a file or to insert into a database)
29. >>> dna_seq = Seq("AGTAACCCTTAGCACTGGT", IUPAC.unambiguous_dna)
>>> protein_seq = Seq("KSMKPPRTHLIMHWIIL", IUPAC.IUPACProtein())
>>> protein_seq + dna_seq
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/home/abarbato/biopython-1.53/build/lib.linux-x86_64-2.4/Bio/Seq.py", line 216, in __add__
raise TypeError("Incompatable alphabets %s and %s"
TypeError: Incompatable alphabets IUPACProtein() and IUPACUnambiguousDNA()

BUT, if you give a generic alphabet to dna_seq and protein_seq:
>>> from Bio.Alphabet import generic_alphabet
>>> dna_seq.alphabet = generic_alphabet
>>> protein_seq.alphabet = generic_alphabet
>>> protein_seq + dna_seq
Seq('KSMKPPRTHLIMHWIILAGTAACCCTTAGCACTGGT', Alphabet())

Concatenating sequences
You can't add sequences with incompatible alphabets (e.g. a protein sequence and a DNA sequence)
30. >>> from Bio.Alphabet import generic_dna
>>> dna_seq = Seq("acgtACGT", generic_dna)
>>> dna_seq.upper()
Seq('ACGTACGT', DNAAlphabet())
>>> dna_seq.lower()
Seq('acgtacgt', DNAAlphabet())
>>>
Changing case
Seq objects have upper() and lower() methods
Note that the IUPAC alphabets are for upper case only
31. >>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> dna_seq = Seq("AGTAACCCTTAGCACTGGT", IUPAC.unambiguous_dna)
>>> dna_seq.complement()
Seq('TCATTGGGAATCGTGACCA', IUPACUnambiguousDNA())
>>> dna_seq.reverse_complement()
Seq('ACCAGTGCTAAGGGTTACT', IUPACUnambiguousDNA())
Nucleotide sequences and (reverse) complements
Seq objects have complement() and reverse_complement() methods
Note that these operations are not allowed with protein alphabets
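Under the hood, complementing is just a base-for-base substitution, and the reverse complement additionally reverses the string. A minimal sketch with plain Python strings (for unambiguous DNA only; Biopython's versions also handle ambiguity codes):

```python
# Sketch of what complement()/reverse_complement() do for plain ACGT DNA.
# Note: str.translate here is character mapping, not biological translation.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def complement(dna):
    return dna.translate(COMPLEMENT)

def reverse_complement(dna):
    return complement(dna)[::-1]

print(complement("AGTAACCCTTAGCACTGGT"))          # TCATTGGGAATCGTGACCA
print(reverse_complement("AGTAACCCTTAGCACTGGT"))  # ACCAGTGCTAAGGGTTACT
```

The outputs match the Seq results shown in the transcript above.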
33. Transcription
>>> coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG", IUPAC.unambiguous_dna)
>>> template_dna = coding_dna.reverse_complement()
>>> template_dna
Seq('CTATCGGGCACCCTTTCAGCGGCCCATTACAATGGCCAT', IUPACUnambiguousDNA())
>>> messenger_rna = coding_dna.transcribe()
>>> messenger_rna
Seq('AUGGCCAUUGUAAUGGGCCGCUGAAAGGGUGCCCGAUAG', IUPACUnambiguousRNA())
>>> messenger_rna.back_transcribe()
Seq('ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG', IUPACUnambiguousDNA())

Note: all this does is switch T --> U and adjust the alphabet.
The Seq object also includes a back-transcription method:
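Since transcription from the coding strand is just the T-to-U switch the note describes, the operation can be illustrated with plain strings (minus the alphabet bookkeeping that Biopython adds):

```python
# Sketch: transcribe() as T -> U on the coding strand; back_transcribe()
# is the inverse substitution.
coding_dna = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"

messenger_rna = coding_dna.replace("T", "U")
print(messenger_rna)
print(messenger_rna.replace("U", "T") == coding_dna)
```

The printed mRNA matches the `transcribe()` output in the transcript above.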
34. Translation
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> messenger_rna = Seq('AUGGCCAUUGUAAUGGGCCGCUGAAAGGGUGCCCGAUAG', IUPAC.unambiguous_rna)
>>> messenger_rna.translate()
Seq('MAIVMGR*KGAR*', HasStopCodon(IUPACProtein(), '*'))
>>>

You can also translate directly from the coding strand DNA sequence:
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> coding_dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG", IUPAC.unambiguous_dna)
>>> coding_dna.translate()
Seq('MAIVMGR*KGAR*', HasStopCodon(IUPACProtein(), '*'))
>>>
35. Translation with different translation tables
>>> coding_dna.translate(table="Vertebrate Mitochondrial")
Seq('MAIVMGRWKGAR*', HasStopCodon(IUPACProtein(), '*'))
>>> coding_dna.translate(table=2)
Seq('MAIVMGRWKGAR*', HasStopCodon(IUPACProtein(), '*'))
>>> coding_dna.translate(to_stop=True)
Seq('MAIVMGR', IUPACProtein())
>>> coding_dna.translate(table=2, to_stop=True)
Seq('MAIVMGRWKGAR', IUPACProtein())

Translation tables available in Biopython are based on those from the NCBI.
By default, translation will use the standard genetic code (NCBI table id 1).
If you deal with mitochondrial sequences:
If you want to translate the nucleotides up to the first in-frame stop, and then stop (as happens in nature):
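The table and to_stop behavior can be sketched in plain Python with a hand-written codon dictionary. This is an illustration only: the dict below covers just the codons occurring in the example sequence, whereas Biopython's real tables come from the NCBI via Bio.Data.CodonTable.

```python
# Minimal codon table covering only the codons in the example sequence.
STANDARD = {
    "ATG": "M", "GCC": "A", "ATT": "I", "GTA": "V", "GGC": "G",
    "CGC": "R", "TGA": "*", "AAG": "K", "GGT": "G", "CGA": "R",
    "TAG": "*",
}
MITO = dict(STANDARD, TGA="W")  # table 2: TGA codes for Trp, not stop

def translate(dna, table, to_stop=False):
    protein = []
    for i in range(0, len(dna) - 2, 3):   # walk codon by codon
        aa = table[dna[i:i + 3]]
        if aa == "*" and to_stop:
            break                          # stop at first in-frame stop
        protein.append(aa)
    return "".join(protein)

coding_dna = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"
print(translate(coding_dna, STANDARD))                # MAIVMGR*KGAR*
print(translate(coding_dna, MITO))                    # MAIVMGRWKGAR*
print(translate(coding_dna, STANDARD, to_stop=True))  # MAIVMGR
```

The three outputs match the `translate()` calls in the transcript above.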
36. Translation tables
>>> from Bio.Data import CodonTable
>>> standard_table = CodonTable.unambiguous_dna_by_name["Standard"]
>>> mito_table = CodonTable.unambiguous_dna_by_name["Vertebrate Mitochondrial"]
# Using the NCBI table ids:
>>> standard_table = CodonTable.unambiguous_dna_by_id[1]
>>> mito_table = CodonTable.unambiguous_dna_by_id[2]

Translation tables available in Biopython are based on those from the NCBI.
By default, translation will use the standard genetic code (NCBI table id 1).
http://www.ncbi.nlm.nih.gov/Taxonomy/Utils/wprintgc.cgi
37. Translation tables
>>> print standard_table
Table 1 Standard, SGC0
| T | C | A | G |
--+---------+---------+---------+---------+--
T | TTT F | TCT S | TAT Y | TGT C | T
T | TTC F | TCC S | TAC Y | TGC C | C
T | TTA L | TCA S | TAA Stop| TGA Stop| A
T | TTG L(s)| TCG S | TAG Stop| TGG W | G
--+---------+---------+---------+---------+--
C | CTT L | CCT P | CAT H | CGT R | T
C | CTC L | CCC P | CAC H | CGC R | C
C | CTA L | CCA P | CAA Q | CGA R | A
C | CTG L(s)| CCG P | CAG Q | CGG R | G
--+---------+---------+---------+---------+--
A | ATT I | ACT T | AAT N | AGT S | T
A | ATC I | ACC T | AAC N | AGC S | C
A | ATA I | ACA T | AAA K | AGA R | A
A | ATG M(s)| ACG T | AAG K | AGG R | G
--+---------+---------+---------+---------+--
G | GTT V | GCT A | GAT D | GGT G | T
G | GTC V | GCC A | GAC D | GGC G | C
G | GTA V | GCA A | GAA E | GGA G | A
G | GTG V | GCG A | GAG E | GGG G | G
--+---------+---------+---------+---------+--
38. Translation tables
>>> print mito_table
Table 2 Vertebrate Mitochondrial, SGC1
| T | C | A | G |
--+---------+---------+---------+---------+--
T | TTT F | TCT S | TAT Y | TGT C | T
T | TTC F | TCC S | TAC Y | TGC C | C
T | TTA L | TCA S | TAA Stop| TGA W | A
T | TTG L | TCG S | TAG Stop| TGG W | G
--+---------+---------+---------+---------+--
C | CTT L | CCT P | CAT H | CGT R | T
C | CTC L | CCC P | CAC H | CGC R | C
C | CTA L | CCA P | CAA Q | CGA R | A
C | CTG L | CCG P | CAG Q | CGG R | G
--+---------+---------+---------+---------+--
A | ATT I(s)| ACT T | AAT N | AGT S | T
A | ATC I(s)| ACC T | AAC N | AGC S | C
A | ATA M(s)| ACA T | AAA K | AGA Stop| A
A | ATG M(s)| ACG T | AAG K | AGG Stop| G
--+---------+---------+---------+---------+--
G | GTT V | GCT A | GAT D | GGT G | T
G | GTC V | GCC A | GAC D | GGC G | C
G | GTA V | GCA A | GAA E | GGA G | A
G | GTG V(s)| GCG A | GAG E | GGG G | G
--+---------+---------+---------+---------+--
39. MutableSeq objects
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import IUPAC
>>> my_seq = Seq('CGCGCGGGTTTATGATGACCCAAATATAGAGGGCACAC', IUPAC.unambiguous_dna)
>>> my_seq[5] = 'A'
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: object does not support item assignment

Like Python strings, Seq objects are immutable.
However, you can convert a Seq into a mutable sequence (a MutableSeq object):
>>> mutable_seq = my_seq.tomutable()
>>> mutable_seq
MutableSeq('CGCGCGGGTTTATGATGACCCAAATATAGAGGGCACAC', IUPACUnambiguousDNA())
40. MutableSeq objects
>>> from Bio.Seq import MutableSeq
>>> from Bio.Alphabet import IUPAC
>>> mutable_seq = MutableSeq('CGCGCGGGTTTATGATGACCCAAATATAGAGGGCACAC', IUPAC.unambiguous_dna)
>>> mutable_seq[5] = 'A'
>>> mutable_seq
MutableSeq('CGCGCAGGTTTATGATGACCCAAATATAGAGGGCACAC', IUPACUnambiguousDNA())

You can create a mutable object directly.
A MutableSeq object can be easily converted back into a read-only sequence:
>>> new_seq = mutable_seq.toseq()
>>> new_seq
Seq('CGCGCAGGTTTATGATGACCCAAATATAGAGGGCACAC', IUPACUnambiguousDNA())
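Plain Python strings are immutable too, and the usual workaround mirrors the tomutable()/toseq() round trip above: convert to a list, mutate in place, then join back into a string. A minimal sketch:

```python
# Sketch: the string analogue of the MutableSeq round trip.
seq = "CGCGCGGGTTTATGATGACCCAAATATAGAGGGCACAC"

mutable = list(seq)          # like my_seq.tomutable()
mutable[5] = "A"             # in-place edit, as on the slide
new_seq = "".join(mutable)   # like mutable_seq.toseq()
print(new_seq)
```

The printed string matches the MutableSeq result in the transcript above.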
41. Sequence Record objects
The SeqRecord class is defined in the Bio.SeqRecord module.
This class allows higher-level features such as identifiers and features to be associated with a sequence.
>>> from Bio.SeqRecord import SeqRecord
>>> help(SeqRecord)
42. class SeqRecord(__builtin__.object)
A SeqRecord object holds a sequence and information about it.
Main attributes:
id - Identifier such as a locus tag (string)
seq - The sequence itself (Seq object or similar)
Additional attributes:
name - Sequence name, e.g. gene name (string)
description - Additional text (string)
dbxrefs - List of db cross references (list of strings)
features - Any (sub)features defined (list of SeqFeature objects)
annotations - Further information about the whole sequence (dictionary)
Most entries are strings, or lists of strings.
letter_annotations -
Per letter/symbol annotation (restricted dictionary). This holds
Python sequences (lists, strings or tuples) whose length
matches that of the sequence. A typical use would be to hold a
list of integers representing sequencing quality scores, or a
string representing the secondary structure.
43. >>> from Bio.Seq import Seq
>>> from Bio.SeqRecord import SeqRecord
>>> TMP = Seq('MKQHKAMIVALIVICITAVVAALVTRKDLCEVHIRTGQTEVAVF')
>>> TMP_r = SeqRecord(TMP)
>>> TMP_r.id
'<unknown id>'
>>> TMP_r.id = 'YP_025292.1'
>>> TMP_r.description = 'toxic membrane protein'
>>> print TMP_r
ID: YP_025292.1
Name: <unknown name>
Description: toxic membrane protein
Number of features: 0
Seq('MKQHKAMIVALIVICITAVVAALVTRKDLCEVHIRTGQTEVAVF',
Alphabet())
>>> TMP_r.seq
Seq('MKQHKAMIVALIVICITAVVAALVTRKDLCEVHIRTGQTEVAVF', Alphabet())
You will typically use Bio.SeqIO to read in sequences from files as
SeqRecord objects. However, you may want to create your own SeqRecord
objects directly:
44. >>> from Bio.Seq import Seq
>>> from Bio.SeqRecord import SeqRecord
>>> from Bio.Alphabet import IUPAC
>>> record = SeqRecord(Seq('MKQHKAMIVALIVICITAVVAALVTRKDLCEVHIRTGQTEVAVF',
...                        IUPAC.protein), id='YP_025292.1', name='HokC',
...                    description='toxic membrane protein')
>>> record
SeqRecord(seq=Seq('MKQHKAMIVALIVICITAVVAALVTRKDLCEVHIRTGQ
TEVAVF', IUPACProtein()), id='YP_025292.1', name='HokC',
description='toxic membrane protein', dbxrefs=[])
>>> print record
ID: YP_025292.1
Name: HokC
Description: toxic membrane protein
Number of features: 0
Seq('MKQHKAMIVALIVICITAVVAALVTRKDLCEVHIRTGQTEVAVF',
IUPACProtein())
>>>
You can also create your own SeqRecord objects as follows:
45. The format() method
It returns a string containing your record formatted using one of the output
file formats supported by Bio.SeqIO
>>> from Bio.Seq import Seq
>>> from Bio.SeqRecord import SeqRecord
>>> from Bio.Alphabet import generic_protein
>>> rec =
SeqRecord(Seq("MGSNKSKPKDASQRRRSLEPSENVHGAGGAFPASQTPSKPASADGHRGPSA
AFVPPAAEPKLFGGFNSSDTVTSPQRAGALAGGVTTFVALYDYESRTETDLSFKKGERLQIVNNTR
KVDVREGDWWLAHSLSTGQTGYIPS", generic_protein), id = "P05480",
description = "SRC_MOUSE Neuronal proto-oncogene tyrosine-protein
kinase Src: MY TEST")
>>> print rec.format("fasta")
>P05480 SRC_MOUSE Neuronal proto-oncogene tyrosine-protein kinase
Src: MY TEST
MGSNKSKPKDASQRRRSLEPSENVHGAGGAFPASQTPSKPASADGHRGPSAAFVPPAAEP
KLFGGFNSSDTVTSPQRAGALAGGVTTFVALYDYESRTETDLSFKKGERLQIVNNTRKVD
VREGDWWLAHSLSTGQTGYIPS
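Under the hood, FASTA output is just a ">" header line plus the sequence wrapped at 60 columns. A stdlib-only sketch of what format("fasta") produces (the id and description strings here are placeholders):

```python
import textwrap

def to_fasta(seq_id, description, sequence, width=60):
    """Render a record as FASTA: a '>' header line followed by the
    sequence wrapped at `width` columns per line."""
    if description:
        header = ">%s %s" % (seq_id, description)
    else:
        header = ">%s" % seq_id
    body = "\n".join(textwrap.wrap(sequence, width))
    return header + "\n" + body + "\n"

print(to_fasta("P05480", "test protein", "M" * 130))
```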
46. INPUT FILE
SCRIPT.py
OUTPUT FILE
Seq1 "ACTGGGAGCTAGC"
Seq2 "TTGATCGATCGATCG"
Seq3 "GTGTAGCTGCT"
F = open("input.txt")
for line in F:
<parse line>
<get seq id>
<get description>
<get sequence>
<get other info>
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Alphabet import generic_protein
rec = SeqRecord(Seq(<sequence>, alphabet),
                id=<seq_id>, description=<description>)
Format_rec = rec.format("fasta")
Out.write(Format_rec)
>P05480 SRC_MOUSE Neuronal proto-oncogene tyrosine-protein
kinase Src: MY TEST
MGSNKSKPKDASQRRRSLEPSENVHGAGGAFPASQTPSKPASADGHRGPSAAFVPPAAEP
KLFGGFNSSDTVTSPQRAGALAGGVTTFVALYDYESRTETDLSFKKGERLQIVNNTRKVD
47. Extracting information from biological resources:
parsing Swiss-Prot files, PDB files, ENSEMBLE records,
blast output files, etc.
• Sequence I/O
– Parsing or Reading Sequences
– Writing Sequence Files
A simple interface for working with assorted file formats in a uniform way
>>>from Bio import SeqIO
>>>help(SeqIO)
Bio.SeqIO
48. Bio.SeqIO.parse()
Reads in sequence data as SeqRecord objects. It expects two arguments:
• A handle to read the data from. It can be:
– a file opened for reading
– the output from a command line program
– data downloaded from the internet
• A lower case string specifying the sequence format (see
http://biopython.org/wiki/SeqIO for a full listing of supported
formats).
The object returned by Bio.SeqIO.parse() is an iterator which returns
SeqRecord objects
49. >>> from Bio import SeqIO
>>> handle = open("P05480.fasta")
>>> for seq_rec in SeqIO.parse(handle, "fasta"):
... print seq_rec.id
... print repr(seq_rec.seq)
... print len(seq_rec)
...
sp|P05480|SRC_MOUSE
Seq('MGSNKSKPKDASQRRRSLERGPSA...ENL', SingleLetterAlphabet())
541
>>> handle.close()
>>> handle = open("1293613.gbk")
>>> for seq_rec in SeqIO.parse(handle, "genbank"):
... print seq_rec.id
... print repr(seq_rec.seq)
... print len(seq_rec)
...
U49845.1
Seq('GATCCTCCATATACAACGGTACGGAA...ATC', IUPACAmbiguousDNA())
5028
>>> handle.close()
50. >>> from Bio import SeqIO
>>> handle = open("AP006852.gbk")
>>> for seq_rec in SeqIO.parse(handle, "genbank"):
... print seq_rec.id
... print repr(seq_rec.seq)
... print len(seq_rec)
...
AP006852.1
Seq('CCACTGTCCAATACCCCCAACAGGAAT...TGT', IUPACAmbiguousDNA())
949626
>>>
>>>handle = open("AP006852.gbk")
>>>identifiers=[seq_rec.id for seq_rec in SeqIO.parse(handle,"genbank")]
>>>handle.close()
>>>identifiers
['AP006852.1']
>>>
Candida albicans genomic DNA, chromosome 7, complete sequence
Using list comprehension:
51. >>> from Bio import SeqIO
>>> handle = open("sprot_prot.fasta")
>>> ids = [seq_rec.id for seq_rec in SeqIO.parse(handle,"fasta")]
>>> ids
['sp|P24928|RPB1_HUMAN', 'sp|Q9NVU0|RPC5_HUMAN',
'sp|Q9BUI4|RPC3_HUMAN', 'sp|Q9BUI4|RPC3_HUMAN',
'sp|Q9NW08|RPC2_HUMAN', 'sp|Q9H1D9|RPC6_HUMAN',
'sp|P19387|RPB3_HUMAN', 'sp|O14802|RPC1_HUMAN',
'sp|P52435|RPB11_HUMAN', 'sp|O15318|RPC7_HUMAN',
'sp|P62487|RPB7_HUMAN', 'sp|O15514|RPB4_HUMAN',
'sp|Q9GZS1|RPA49_HUMAN', 'sp|P36954|RPB9_HUMAN',
'sp|Q9Y535|RPC8_HUMAN', 'sp|O95602|RPA1_HUMAN',
'sp|Q9Y2Y1|RPC10_HUMAN', 'sp|Q9H9Y6|RPA2_HUMAN',
'sp|P78527|PRKDC_HUMAN', 'sp|O15160|RPAC1_HUMAN',…,
'sp|Q9BWH6|RPAP1_HUMAN']
Here we do it using the sprot_prot.fasta file
52. Iterating over the records in a sequence file
Instead of using a for loop, you can also use the next() method of an
iterator to step through the entries
>>> handle = open("sprot_prot.fasta")
>>> rec_iter = SeqIO.parse(handle, "fasta")
>>> rec_1 = rec_iter.next()
>>> rec_1
SeqRecord(seq=Seq('MHGGGPPSGDSACPLRTIKRVQFGVLSPDELKRMSVTEGGIKYPET
TEGGRPKL...EEN', SingleLetterAlphabet()),
id='sp|P24928|RPB1_HUMAN', name='sp|P24928|RPB1_HUMAN',
description='sp|P24928|RPB1_HUMAN DNA-directed RNA polymerase II
subunit RPB1 OS=Homo sapiens GN=POLR2A PE=1 SV=2', dbxrefs=[])
>>> rec_2 = rec_iter.next()
>>> rec_2
SeqRecord(seq=Seq('MANEEDDPVVQEIDVYLAKSLAEKLYLFQYPVRPASMTYDDIPHLS
AKIKPKQQ...VQS', SingleLetterAlphabet()),
id='sp|Q9NVU0|RPC5_HUMAN', name='sp|Q9NVU0|RPC5_HUMAN',
description='sp|Q9NVU0|RPC5_HUMAN DNA-directed RNA polymerase III
subunit RPC5 OS=Homo sapiens GN=POLR3E PE=1 SV=1', dbxrefs=[])
>>> handle.close()
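In Python 3 the .next() method shown above became __next__(), called via the built-in next(); a small sketch with a plain iterator of hypothetical records:

```python
# next() steps through any iterator one item at a time.
rec_iter = iter([("rec_1", "ACGT"), ("rec_2", "TTGG")])
rec_1 = next(rec_iter)        # first record
rec_2 = next(rec_iter)        # second record
rec_3 = next(rec_iter, None)  # exhausted: default value instead of StopIteration
```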
53. If your file has one and only one record (e.g. a GenBank file for a single
chromosome), then use the Bio.SeqIO.read().
This will check there are no extra unexpected records present
Bio.SeqIO.read()
>>> rec_iter = SeqIO.parse(open("1293613.gbk"), "genbank")
>>> rec = rec_iter.next()
>>> print rec
ID: U49845.1
Name: SCU49845
Description: Saccharomyces cerevisiae TCP1-beta gene, partial cds, and Axl2p
(AXL2) and Rev7p (REV7) genes, complete cds.
Number of features: 6
/sequence_version=1
/source=Saccharomyces cerevisiae (baker's yeast)
/taxonomy=['Eukaryota', 'Fungi', 'Ascomycota', 'Saccharomycotina',
'Saccharomycetes', 'Saccharomycetales', 'Saccharomycetaceae', 'Saccharomyces']
/keywords=['']
/references=[Reference(title='Cloning and sequence of REV7, a gene whose function
is required for DNA damage-induced mutagenesis in Saccharomyces cerevisiae', ...),
Reference(title='Selection of axial growth sites in yeast requires Axl2p, a novel
plasma membrane glycoprotein', ...), Reference(title='Direct Submission', ...)]
/accessions=['U49845']
/data_file_division=PLN
/date=21-JUN-1999
/organism=Saccharomyces cerevisiae
/gi=1293613
Seq('GATCCTCCATATACAACGGTATCTCCACCTCAGGTTTAGATCTCAACAACGGAA...ATC',
IUPACAmbiguousDNA())
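The "exactly one record" check that Bio.SeqIO.read() performs can be sketched with a stdlib helper working over any iterator (a sketch of the idea, not Biopython's implementation):

```python
def read_one(iterator):
    """Return the single item from an iterator, raising ValueError if
    the source has zero or more than one item - like Bio.SeqIO.read()."""
    try:
        record = next(iterator)
    except StopIteration:
        raise ValueError("No records found in handle")
    try:
        next(iterator)
    except StopIteration:
        return record       # exactly one record: success
    raise ValueError("More than one record found in handle")
```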
54. Sequence files as lists
Sequence files as dictionaries
>>> from Bio import SeqIO
>>> handle = open("ncbi_gene.fasta")
>>> records = list(SeqIO.parse(handle, "fasta"))
>>> records[-1]
SeqRecord(seq=Seq('gggggggggggggggggatcactctctttcagtaacctcaac...c
cc', SingleLetterAlphabet()), id='A10421', name='A10421',
description='A10421 Synthetic nucleotide sequence having a human
IL-2 gene obtained from pILOT135-8. : Location:1..1000',
dbxrefs=[])
>>> handle = open("ncbi_gene.fasta")
>>> records = SeqIO.to_dict(SeqIO.parse(handle, "fasta"))
>>> handle.close()
>>> records.keys()
['M69013', 'M69012', 'AJ580952', 'J03005', 'J03004', 'L13858',
'L04510', 'M94539', 'M19650', 'A10421', 'AJ002990', 'A06663',
'A06662', 'S62035', 'M57424', 'M90035', 'A06280', 'X95521',
'X95520', 'M28269', 'S50017', 'L13857', 'AJ345013', 'M31328',
'AB038040', 'AB020593', 'M17219', 'DQ854814', 'M27543', 'X62025',
'M90043', 'L22075', 'X56614', 'M90027']
>>> seq_record = records['X95521']
>>> seq_record.description
'X95521 M.musculus mRNA for cyclic nucleotide phosphodiesterase :
Location:1..1000'
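SeqIO.to_dict() builds an in-memory dictionary keyed by record id and treats duplicate identifiers as an error. A stdlib sketch over plain (id, sequence) pairs:

```python
def to_dict(records):
    """Build an {id: sequence} dict from (id, sequence) pairs.
    Duplicate identifiers raise an error rather than silently
    overwriting - the same policy as Bio.SeqIO.to_dict()."""
    index = {}
    for rec_id, seq in records:
        if rec_id in index:
            raise ValueError("Duplicate key '%s'" % rec_id)
        index[rec_id] = seq
    return index
```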
55. Parsing sequences from the net
Parsing GenBank records from the net
Parsing SwissProt sequence from the net
Handles are not always from files
>>>from Bio import Entrez
>>>from Bio import SeqIO
>>>handle = Entrez.efetch(db="nucleotide",rettype="fasta",id="6273291")
>>>seq_record = SeqIO.read(handle,"fasta")
>>>handle.close()
>>>seq_record.description
>>>from Bio import ExPASy
>>>from Bio import SeqIO
>>>handle = ExPASy.get_sprot_raw("6273291")
>>>seq_record = SeqIO.read(handle,"swiss")
>>>handle.close()
>>>print seq_record.id
>>>print seq_record.name
>>>print seq_record.description
56. Indexing really large files
Bio.SeqIO.index() returns a dictionary without keeping
everything in memory.
It works fine even for millions of sequences
The main drawback is less flexibility: it is read-only
>>> from Bio import SeqIO
>>> recs_dict = SeqIO.index("ncbi_gene.fasta", "fasta")
>>> len(recs_dict)
34
>>> recs_dict.keys()
['M69013', 'M69012', 'AJ580952', 'J03005', 'J03004', 'L13858', 'L04510',
'M94539', 'M19650', 'A10421', 'AJ002990', 'A06663', 'A06662', 'S62035',
'M57424', 'M90035', 'A06280', 'X95521', 'X95520', 'M28269', 'S50017',
'L13857', 'AJ345013', 'M31328', 'AB038040', 'AB020593', 'M17219', 'DQ854814',
'M27543', 'X62025', 'M90043', 'L22075', 'X56614', 'M90027']
>>> print recs_dict['M57424']
ID: M57424
Name: M57424
Description: M57424 Human adenine nucleotide translocator-2 (ANT-2) gene,
complete cds. : Location:1..1000
Number of features: 0
Seq('gagctctggaatagaatacagtagaggcatcatgctcaaagagagtagcagatg...agc',
SingleLetterAlphabet())
57. Writing sequence files
Bio.SeqIO.write()
This function takes three arguments:
1. some SeqRecord objects
2. a handle to write to
3. a sequence format
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Alphabet import generic_protein
Rec1 = SeqRecord(Seq("ACCA…", generic_protein), id="1", description="")
Rec2 = SeqRecord(Seq("CDRFAA", generic_protein), id="2", description="")
Rec3 = SeqRecord(Seq("GRKLM", generic_protein), id="3", description="")
My_records = [Rec1, Rec2, Rec3]
from Bio import SeqIO
handle = open("MySeqs.fas", "w")
SeqIO.write(My_records, handle, "fasta")
handle.close()
58. Converting between sequence file formats
We can do file conversion by combining Bio.SeqIO.parse()
and Bio.SeqIO.write()
from Bio import SeqIO
>>> In_handle = open("AP006852.gbk", "r")
>>> Out_handle = open("AP006852.fasta", "w")
>>> records = SeqIO.parse(In_handle, "genbank")
>>> count = SeqIO.write(records, Out_handle, "fasta")
>>> count
1
>>>
>>> In_handle.close()
>>> Out_handle.close()