This document summarizes challenges in assembling large DNA sequence data sets and strategies to address them.
1. The cost to generate DNA sequence data is decreasing rapidly, creating data sets too large for most computers to assemble. Hundreds to thousands of such data sets are generated each year.
2. Techniques like streaming lossy compression and low-memory probabilistic data structures let assembly memory scale with the information content of the sample (O(I), where I << N) rather than with the total volume of data, enabling assembly of much larger data sets.
3. Benchmarking different computational platforms revealed that while some platforms have faster processors, the ability to store large amounts of data locally is also important for assembly tasks. Scaling algorithms, rather than just optimizing code, is key to addressing these challenges.
1. Instrument ALL the things: Studying data-intensive workflows in the cloud.
C. Titus Brown
Michigan State University
(See blog post)
2. A few upfront definitions
Big Data, n: whatever is still inconvenient to compute on.
Data scientist, n: a statistician who lives in San Francisco.
Professor, n: someone who writes grants to fund people who do the work (cf. Fernando Perez)
I am a professor (not a data scientist) who writes grants so that others can do data-intensive biology.
3. This talk dedicated to Terry Peppers
Titus, I no longer understand what you actually do…
Daddy, what do you do at work!?
4. I assemble puzzles for a living.
Well, ok, I strategize about solving multi-dimensional puzzles with billions of pieces and no box.
5. Three bioinformatic strategies in use
• Greedy: “if the piece sorta fits…”
• N² – “Do these two pieces match? How about this next one?”
• The Dutch approach.
7. The Dutch Solution
Algorithmically:
• Is linear in time with number of pieces (way better than N²!)
• Is linear in memory with volume of data (this is due to errors in the digitization process.)
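The “Dutch approach” here is an analogy for De Bruijn graph assembly: break every piece into k-length words and connect words that overlap by k-1 characters, which takes a single linear pass over the pieces. A minimal sketch of the idea, assuming toy reads and k=5 (the function name and data are illustrative choices, not from the talk):

```python
from collections import defaultdict

def build_debruijn(reads, k=5):
    """Build a De Bruijn graph in one pass over the reads (linear time).

    Memory is proportional to the number of *distinct* k-mers seen,
    which sequencing errors inflate: a single wrong character creates
    up to k k-mers that exist nowhere in the real genome.
    """
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k):
            prefix = read[i:i + k]        # node
            suffix = read[i + 1:i + k + 1]  # overlapping successor node
            graph[prefix].add(suffix)
    return graph

reads = ["ATGGCGTGCA", "GGCGTGCAAT"]  # two overlapping "puzzle pieces"
g = build_debruijn(reads, k=5)
print(len(g))  # number of distinct k-mer nodes with outgoing edges
```

Overlaps between the two reads collapse into shared nodes, which is why the work grows with the number of pieces rather than with all pairwise comparisons.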
9. Our research challenges –
1. It costs only $10k & 1 week to generate enough sequence data that no commodity computer (and few supercomputers) can assemble it.
2. Hundreds -> thousands of such data sets are being generated each year.
11. Our research (i) - CS
• Streaming lossy compression approach that discards pieces we’ve seen before.
• Low memory probabilistic data structures.
(…see PyCon 2013 talk)
=> RAM now scales better: O(I) where I << N
(I is sample dependent but typically I < N/20)
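These two ingredients fit together in a few lines: a fixed-size count-min sketch gives approximate k-mer counts, and a streaming filter drops reads whose k-mers are already well covered, which is the idea behind digital normalization. This is a simplified sketch of the technique, not the actual khmer API; all names and parameters are illustrative:

```python
import hashlib

class CountMinSketch:
    """Fixed-memory approximate counter: memory never grows with the data."""

    def __init__(self, width=10007, depth=4):
        self.width = width
        self.tables = [[0] * width for _ in range(depth)]

    def _hashes(self, kmer):
        for seed in range(len(self.tables)):
            digest = hashlib.sha1(f"{seed}:{kmer}".encode()).hexdigest()
            yield int(digest, 16) % self.width

    def add(self, kmer):
        for row, idx in zip(self.tables, self._hashes(kmer)):
            row[idx] += 1

    def count(self, kmer):
        # min across rows bounds the overestimate from hash collisions
        return min(row[idx] for row, idx in zip(self.tables, self._hashes(kmer)))

def diginorm(reads, k=5, cutoff=3):
    """Streaming lossy compression: discard reads whose median k-mer
    coverage is already >= cutoff (pieces we've seen before)."""
    sketch, kept = CountMinSketch(), []
    for read in reads:
        kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
        counts = sorted(sketch.count(km) for km in kmers)
        if counts[len(counts) // 2] < cutoff:  # median coverage below cutoff
            kept.append(read)
            for km in kmers:
                sketch.add(km)
    return kept

reads = ["ATGGCGTGCA"] * 10 + ["TTACCGGTAC"]
print(len(diginorm(reads)))  # redundant copies are dropped, novel read kept
```

Because the sketch’s size is fixed up front, memory tracks the information in the sample (the novel reads that get kept) rather than the total volume streamed past, matching the O(I) claim above.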
12. Our research (ii) - approach
• Open source, open data, open science, and reproducible computational research.
– GitHub
– Automated testing, CI, & literate reSTing
– Blogging, Twitter
– IPython Notebook for data analysis, figures.
• Protocols for assembling in the cloud.
14. Doing things right => #awesomesauce
Protocols in English for running analyses in the cloud
Literate reSTing => shell scripts
Tool competitions
Benchmarking
Education
Acceptance tests
15. Benchmarking strategy
• Rent a bunch of cloud VMs from Amazon and Rackspace.
• Extract commands from tutorials using literate-resting.
• Use ‘sar’ (sysstat pkg) to sample CPU, RAM, and disk I/O.
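Sampling with sar produces plain-text tables that still need post-processing before plotting. A small sketch of how the %memused column of `sar -r` output might be extracted; the column position and the sample text are assumptions for illustration, since sysstat versions differ in layout:

```python
def parse_sar_memory(text):
    """Parse `sar -r`-style output into (timestamp, %memused) samples.

    Assumes %memused is the 4th column, as in the sample below;
    the header row is skipped because its 4th field is not a number.
    """
    samples = []
    for line in text.splitlines():
        cols = line.split()
        if len(cols) >= 4 and cols[0][0].isdigit():
            try:
                samples.append((cols[0], float(cols[3])))
            except ValueError:
                continue  # header row such as "... %memused ..."
    return samples

sample = """\
12:00:01 kbmemfree kbmemused %memused kbbuffers
12:10:01  1572864  14680064  90.32    204800
12:20:01  1048576  15204352  93.55    204800
"""
peak = max(pct for _, pct in parse_sar_memory(sample))
print(peak)  # -> 93.55, the peak memory utilisation over the run
```

Peak RAM and sustained disk I/O, not just CPU time, are exactly the quantities the benchmarking slides below compare across instance types.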
24. Can’t we just use a faster computer?
• Demo data on m1.xlarge: 2789 s
• Demo data on m3.xlarge: 1970 s – 30% faster!
(Why? m3.xlarge has 2x40 GB SSD drives & 40% faster cores.)
Great! Let’s try it out!
25. Observation #3: multifaceted problem!
• Full data on m1.xlarge: 45.5 h
• Full data on m3.xlarge: out of disk space.
We need about 200 GB to run the full pipeline. You can have fast disk or lots of disk but not both, for the moment.
26. Future directions
1. Invest in cache-local data structures and algorithms.
2. Invest in streaming/in-memory approaches.
3. Not clear (to me) that straight code optimization or infrastructure engineering is a worthwhile investment.
27. Frequently Offered Solutions
1. You should like, totally multithread that.
(See: McDonald & Brown, POSA)
2. Hadoop will just crush that workload, dude.
(Unlikely to be cost-effective.)
3. Have you tried <my proprietary Big Data technology stack>?
(Thatz Not Science)
28. Optimization vs scaling
• Linear time/memory improvements would not have addressed our core problem.
(2 years, 20x improvement, 100x increase in data.)
• Puzzle problem is a graph problem with big data, no locality, small compute. Not friendly.
• We need(ed) to scale our algorithms.
• Can now run on single-chassis, in ~15 GB RAM.
31. What are we losing by focusing our engineering on pleasantly parallel problems?
• Hadoop is fundamentally not that interesting.
• Research is about the 100x.
• Scaling new problems, evaluating/creating new data structures and algorithms, etc.
32. (From my PyCon 2011 talk.)
Theme: Life’s too short to tackle the easy problems – come to academia!
33. Thanks!
• Leigh Sheneman, for starting the benchmarking project.
• Labbies: Michael R. Crusoe, Luiz Irber, Likit Preeyanon, Camille Scott, and Qingpeng Zhang.
34. Thanks!
• github.com/ged-lab/
– khmer – core project
– khmer-protocols – tutorials/acceptance tests
– literate-resting – script to pull out code from reST tutorials
• Blog post at: http://ivory.idyll.org/blog/2014-pycon.html
• Michael R. Crusoe, Likit Preeyanon, Camille Scott, and Qingpeng Zhang are here at PyCon.
…note, you can probably afford to buy them off me :)