Python for High Throughput Science
Mark Basham
Scientific Software Group
Diamond Light Source Ltd UK.
Overview
• What is Diamond Light Source
• Big Data?
• Python for scientists
• Python for developers
Diamond Light Source
What do I do?
• Provide data analysis for use during and after beamtime for users
  – Users may or may not have any prior experience.
  – ~30 beamlines with over 100 techniques in use.
• Together with 12 other full-time developers
Where it all started
The GDA acquisition framework (www.opengda.org) – all core technologies open source:
• Client–server technology
• Communication with EPICS and hardware
• Scan mechanism
• Jython and Python scripting
• Visualisation
• Communication with external analysis and analysis tools
• 1.0 release 2002
• 3.0 release 2004
  – Jython introduced as the scripting language
  – Beamline setup and data collection speed increased
Universal Data Problem
Detector History at DLS
• Early 2007:
  – Diamond's first user.
  – No detector faster than ~10 MB/s.
• Early 2009:
  – First Lustre system (DDN S2A9900).
  – First Pilatus 6M system @ 60 MB/s.
• Early 2011:
  – Second Lustre system (DDN SFA10K).
  – First 25 Hz Pilatus 6M system @ 150 MB/s.
• Early 2013:
  – First GPFS system (DDN SFA12K).
  – First 100 Hz Pilatus 6M system @ 600 MB/s.
  – ~10 beamlines with 10 GbE detectors (mainly Pilatus and PCO Edge).
• Early 2015:
  – Delivery of the Percival detector (6000 MB/s).
[Chart: peak detector performance (MB/s) by year, log scale 1–10,000, from 2007 onwards.]
Per Beamline Data Rates
[Facility map: beamlines coloured by data rate – < 100 GB/day, < 1 TB/day, > 1 TB/day.]
Data Storage
• ~1 PB of Lustre
• ~1 PB of GPFS
• ~0.5 PB of on-line archive
• ~1 PB of near-line archive
  – >200M files
High-performance parallel file systems HATE lots of small files.
Small Data Rate Beamlines (Variety)
[Facility map: data-rate legend as before.]
“I have all the data I have ever collected on a floppy disk and process it by hand…”
– Principal beamline scientist, when asked about data volumes in 2005
The same beamline today: ~1 TB so far this year.
Processing Data (Variety)
• Experimental work requires exploring
– Matlab
– IDL
– IgorPro
– Excel
– Origin
– Mathematica
Processing → Playing with Data (Variety)
• Experimental work requires exploring
  – Matlab
  – IDL
  – IgorPro
  – Excel
  – Origin
  – Mathematica
• The issue is whether these scale at all, and at a reasonable price
Clusters (Velocity)
• 132 Intel-based nodes, 1280 Intel cores in service.
• 80 NVIDIA GPGPUs, 23,328 GPU cores in service.
• Split across 6 clusters, with a range of capabilities.
• Mostly used by MX and tomography beamlines.
• All accessed via the Sun Grid Engine interface.
Python is the Obvious Answer
• Users have used it during their beamtimes.
• Free and easily distributable.
• ...
• BUT – how do we give it to them in a way they understand?
Extending the Acquisition tools
• GDA acquisition components (www.opengda.org): client–server technology, communication with EPICS and hardware, scan mechanism, Jython and Python scripting, visualisation, communication with external analysis, analysis tools.
• Added analysis components (www.dawnsci.org): data read, write and convert; metadata structure; workflows.
• All core technologies open source.
DAWN is a collection of generic and bespoke ‘views’ collated into ‘perspectives’. The perspectives and views can be used in part or in whole in either the GDA (acquisition) or DAWN (analysis).
Main DAWN Elements for Python (www.dawnsci.org)
• Python/Jython
• PyDev scripting
• IPython console
• scisoftpy module
• Data exploring and visualisation
• HDF5
• Workflow (Python actor)
Scisoftpy plotting (a minimal sketch follows)
Interactive console:
• Run from the command line
• Real-time variable view
• IPython interface
• Integrated debugging
Scripting tools
• Breakpoints and step-by-step debugging
• Interact with the interpreter while paused
Python @ Diamond
• Anaconda
  – NumPy
  – SciPy
  – h5py (example below)
  – mpi4py
  – Web services
• ASTRA (tomography)
• FFTW (ptychography)
• CCTBX (crystallography)
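As a concrete illustration of the h5py part of that stack, here is a minimal, hedged sketch of pulling one detector frame out of an HDF5/NeXus scan file. The file name and dataset path are invented for the example, not a real Diamond layout.

```python
# Minimal sketch only: the file name and dataset path below are hypothetical.
import h5py

with h5py.File("scan_0001.nxs", "r") as f:              # hypothetical scan file
    detector = f["/entry/instrument/detector/data"]     # hypothetical dataset path
    frame = detector[0, :, :]                           # read just the first frame

print(frame.shape, frame.dtype, frame.mean())
```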
Processing → Playing with Data (Variety)
• Experimental work requires exploring
  – Python
• Scientific Software team
  – Modules for easy access and common tasks
  – Repositories and training
Aside – Python for Optimization
• We produce a very fast beam of electrons (99.999999% of the speed of light)
• We oscillate this beam between magnet arrays called Insertion Devices (IDs) to make lots of light
Insertion Devices (IDs, ~600 magnets each)
Individual Magnet (~800 available) – a unique magnet in a magnet holder
Perfect vs. real magnet (x, y, z components):

          X      Y      Z
Perfect   1.0    0.0    0.0
Real      1.08   0.02  -0.04
Simple Optimisation Problem
• From the ~800 magnets, pick 600 of them in the right order so that they appear to be a perfect array.
• But we already have code in Fortran
  – A bit hard to use
  – Not that extensible to new systems
Objective Functions
• Slower in Python than Fortran
  – Original code: ~1,000 times slower
  – NumPy array optimised: ~10 times slower
• Python improvements
  – Caching: matched the Fortran speed
  – Clever updating: ~100 times faster (sketched below)
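The gains above come from restructuring the objective function; the "clever updating" step in particular avoids recomputing the whole field sum when a candidate move swaps a single magnet. A hedged sketch of that idea, with an invented class and a quadratic field-error model standing in for the real objective:

```python
# Hedged sketch of the incremental-update idea; the class name and the
# quadratic field-error model are invented for illustration.
import numpy as np

class CachedObjective:
    def __init__(self, contributions):
        # contributions[i] = precomputed field contribution of the magnet in slot i
        self.contributions = np.array(contributions, dtype=float)
        self.total = self.contributions.sum(axis=0)   # cached running field sum

    def value(self):
        # squared deviation of the summed field from the ideal (zero-error) field
        return float(np.sum(self.total ** 2))

    def swap(self, slot, new_contribution):
        # update only the changed slot instead of re-summing all ~600 magnets
        self.total -= self.contributions[slot]
        self.total += new_contribution
        self.contributions[slot] = new_contribution
```

The trade is memory (the cached per-slot contributions and running sum) for time: evaluating a single swap no longer scales with the number of magnets.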
OptID
• Artificial immune systems
  – Global optimiser
  – Needs more evaluations
• Parallelisation (see the sketch below)
  – Threading with NumPy to use all processors
  – mpi4py for data transfer and making use of the cluster
• Running on 25 machines, 200 CPUs
• The first sort with the new code has been built.
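The mpi4py part of that parallelisation can be sketched as a scatter/gather of candidate orderings across ranks. This is an illustrative skeleton, not the OptID code: the objective is a placeholder and the candidates are random rather than coming from the artificial-immune-system population.

```python
# Illustrative skeleton only. Run with e.g. `mpirun -n 8 python sketch.py`.
from mpi4py import MPI
import numpy as np

def score(order):
    # placeholder for the real field-error objective function
    return float(np.sum(order ** 2))

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    candidates = [np.random.rand(600) for _ in range(size)]  # one per rank
else:
    candidates = None

mine = comm.scatter(candidates, root=0)    # each rank receives one candidate
scores = comm.gather(score(mine), root=0)  # collect the scores on rank 0

if rank == 0:
    print("best candidate index:", int(np.argmin(scores)))
```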
High Data Rate Beamlines
[Facility map: data-rate legend as before.]
Archiving (Veracity)
• A simple task of registering files and metadata with a remote service (see the sketch below):
  – XML parsing
  – Contacting web services
  – File system interaction
• Nearly 1 PB of data and 200 million files archived through this system.
• Extended onto the cluster to deal with the additional load.
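The three ingredients listed above fit in a few lines of standard-library Python. The sketch below is illustrative only: the manifest schema and the archive endpoint URL are invented, and the production system's metadata, authentication and error handling are omitted.

```python
# Illustrative only: the XML schema and endpoint URL are hypothetical.
import os
import xml.etree.ElementTree as ET
import urllib.request

def build_manifest(root_dir):
    dataset = ET.Element("dataset", {"path": root_dir})
    for dirpath, _, filenames in os.walk(root_dir):        # file system interaction
        for name in filenames:
            full = os.path.join(dirpath, name)
            ET.SubElement(dataset, "file",
                          {"name": full, "size": str(os.path.getsize(full))})
    return ET.tostring(dataset)                            # XML manifest as bytes

def register(manifest, url="https://archive.example.org/register"):  # hypothetical endpoint
    request = urllib.request.Request(
        url, data=manifest, headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(request) as response:      # contact the web service
        return response.read()
```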
MX Data Processing (Volume and Velocity)
[Facility map: data-rate legend as before.]
MX Data Reduction (Volume)
• Fast DP – fast:
  Index → Integrate (split across parallel integration jobs) → Scale and refine in P1 → Pointless to choose the best point group → Scale, post-refine and merge in the point group → output MTZ file → downstream processing...
• xia2 – thorough
Experimental Phasing (Velocity)
• Fast EP:
  Fast DP MTZ file → prepare for Shelx (ShelxC) → find substructure (ShelxD, searching over numbers of sites and space groups) → phase (ShelxE, original and inverted hands, solvent fractions 0.25–0.75) → experimentally phased map.
• Results location: (visitpath)/processed/(folder)/(prefix)
DIALS
• A full application being built in Python
  – 4 full-time developers
• CCTBX
  – Extending and working with this open source project
• Boost
  – Optimisation, when required, using Boost
Tomography Data Reconstruction (Volume and Velocity)
[Facility map: data-rate legend as before.]
Tomography Current Implementation
• Existing reconstruction codes in C with CUDA
  – Only run on TIFFs
  – Minimal data correction for experimental artefacts
  – Only use 1 GPU
• Python (see the sketch below)
  – Splits data and manages cluster usage (2 GPUs per node)
  – Extracts corrected data from HDF5
  – Builds input files from metadata
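A hedged sketch of the Python "glue" role described above: read the dataset shape from the HDF5 file, split it into slice ranges, and submit one cluster job per range. The dataset path, the recon_chunk.sh wrapper script and the Sun Grid Engine options are assumptions, not the production configuration.

```python
# Illustrative only: dataset path, wrapper script and qsub options are assumed.
import subprocess
import h5py

def submit_reconstruction(nexus_file, chunks=16, dataset="/entry/data/data"):
    with h5py.File(nexus_file, "r") as f:
        n_rows = f[dataset].shape[1]            # number of sinogram rows to split

    step = (n_rows + chunks - 1) // chunks
    for start in range(0, n_rows, step):
        end = min(start + step, n_rows)
        # one cluster job per slab; the wrapper script would run the CUDA code
        subprocess.check_call(["qsub", "-l", "gpu=1", "recon_chunk.sh",
                               nexus_file, str(start), str(end)])
```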
Tomography Next Gen
• mpi4py
  – Cluster organisation
  – Parallelism
  – Queues using send buffers
• Transfer of data using ZeroMQ (see the sketch below)
  – Using blosc for compression
• Processing in Python where possible
  – But calls to external code will be used initially
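A minimal sketch of the ZeroMQ-plus-blosc transfer idea: the sender compresses a block of frames and pushes it, a worker pulls, decompresses and reconstructs the array. Socket endpoints, array shapes, dtypes and compression settings are all placeholders.

```python
# Illustrative only: endpoints, shapes, dtypes and settings are placeholders.
import numpy as np
import zmq
import blosc

def push_frames(frames, endpoint="tcp://*:5557"):
    sock = zmq.Context().socket(zmq.PUSH)
    sock.bind(endpoint)
    payload = blosc.compress(frames.tobytes(), typesize=frames.dtype.itemsize)
    sock.send(payload)                      # compressed block over the wire

def pull_frames(shape, dtype, endpoint="tcp://localhost:5557"):
    sock = zmq.Context().socket(zmq.PULL)
    sock.connect(endpoint)
    payload = sock.recv()
    data = blosc.decompress(payload)        # raw bytes back
    return np.frombuffer(data, dtype=dtype).reshape(shape)
```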
Multiprocessor + MPI “profiling”
• JavaScript (Google Charts):

    var dataTable = new google.visualization.DataTable();

• Python:

    import logging
    logging.basicConfig(
        level=0,
        format='L %(asctime)s.%(msecs)03d M' + machine_number_string +
               ' ' + rank_names[machine_rank] + ' %(levelname)-6s %(message)s',
        datefmt='%H:%M:%S')

• Jinja2 templating to tie the two together (a sketch follows)
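A hedged sketch of how those pieces could be tied together: parse the timestamped log lines produced by the format above and render them into an HTML page that fills a google.visualization DataTable via a Jinja2 template. The regular expression and the template are simplified stand-ins, not the production report.

```python
# Illustrative only: regex and template are simplified stand-ins.
import re
from jinja2 import Template

LOG_RE = re.compile(r"L (\d\d:\d\d:\d\d\.\d\d\d) M(\d+) (\S+) (\w+)\s+(.*)")

PAGE = Template("""
<html><head>
  <script src="https://www.gstatic.com/charts/loader.js"></script>
  <script>
    google.charts.load('current', {packages: ['corechart']});
    google.charts.setOnLoadCallback(function () {
      var dataTable = new google.visualization.DataTable();
      dataTable.addColumn('string', 'rank');
      dataTable.addColumn('string', 'time');
      dataTable.addColumn('string', 'message');
      dataTable.addRows([
        {% for e in events %}['{{ e.rank }}', '{{ e.time }}', '{{ e.msg }}'],
        {% endfor %}
      ]);
      // ...draw a chart from dataTable here...
    });
  </script>
</head><body></body></html>
""")

def parse(lines):
    events = []
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            time, machine, rank, level, msg = m.groups()
            events.append({"time": time, "rank": "M%s/%s" % (machine, rank), "msg": msg})
    return events

def render_report(lines):
    return PAGE.render(events=parse(lines))
```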
Where are we going?
• Scientists are having to become developers
  – We try to steer them in the right direction
  – Python is a very good, if not the best, tool for this
• Developers are having to work faster and be more reactive to new detectors, clusters, software, methods, ...
  – Python allows this, and is being adopted almost as standard by new computational projects at Diamond
Acknowledgements
– Alun Ashton
– Graeme Winter
– Greg Matthews
– Tina Friedrich
– Frederik Ferner
– Jonah Graham (Kichwa)
– Matthew Gerring
– Peter Chang
– Baha El Kassaby
– Jacob Filik
– Karl Levik
– Irakli Sikharulidze
– Olof Svensson
– Andy Gotz
– Gábor Náray
– Ed Rial
– Robert Oates
Thanks for Listening...
@basham_mark
www.dawnsci.org
www.diamond.ac.uk
