This document summarizes Tim Bell's presentation on big data science and computing at CERN. It discusses:
1) The large volumes of data generated by the LHC experiments, including 40 million pictures per second and over 800 petabytes of stored data worldwide.
2) The worldwide computing grid used to store, process and analyze LHC data across over 170 computing centers in 42 countries.
3) How CERN has transitioned its computing infrastructure from mainframes to Linux and open source cloud technologies like OpenStack to manage its increasing data needs.
In this deck from HiPEAC 2018 in Manchester, CERN's Maria Girone outlines computing challenges at the Large Hadron Collider (LHC).
"The Large Hadron Collider (LHC) is one of the largest and most complicated scientific apparata ever constructed. The detectors at the LHC ring see as many as 800 million proton-proton collisions per second. An event in 10 to the 11th power is new physics and there is a hierarchical series of steps to extract a tiny signal from an enormous background. High energy physics (HEP) has long been a driver in managing and processing enormous scientific datasets and the largest scale high throughput computing centers. HEP developed one of the first scientific computing grids that now regularly operates 750k processor cores and half of an exabyte of disk storage located on 5 continents including hundred of connected facilities. In this keynote, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has currently produced."
Watch the video: https://wp.me/p3RLHQ-i4s
Learn more: https://www.hipeac.net/2018/manchester/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
CERN is the European Centre for Particle Physics, based in Geneva. The home of the Large Hadron Collider and the birthplace of the World Wide Web is expanding its computing resources with a second data centre to process over 35 PB/year from one of the largest scientific experiments ever constructed.
Within the constraints of a fixed budget and manpower, agile computing techniques and common open source tools are being adopted to support over 11,000 physicists in their search for how the universe works and what it is made of.
By challenging special requirements and understanding how other large computing infrastructures are built, we have deployed a 50,000-core cloud-based infrastructure built on tools such as Puppet, OpenStack and Kibana.
Moving to a cloud model has also required a close examination of IT processes and culture. Finding the right balance between Enterprise and DevOps techniques has been one of the greatest challenges of this transformation.
This talk will cover the requirements, tools selected, results achieved so far and the outlook for the future.
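As a purely illustrative aside on the kind of tooling mentioned in the abstract above, the following minimal Python sketch provisions a few virtual machines through the OpenStack SDK; the cloud name, image, flavour and network are hypothetical placeholders, not CERN's actual configuration.

import openstack  # openstacksdk

# Minimal sketch: provision a few worker VMs on an OpenStack cloud.
# "my-cloud" and the image, flavour and network names are hypothetical placeholders.
conn = openstack.connect(cloud="my-cloud")   # credentials taken from clouds.yaml

for i in range(3):
    server = conn.create_server(
        name=f"batch-worker-{i:03d}",
        image="CentOS-7-x86_64",    # assumed image name
        flavor="m2.medium",         # assumed flavour name
        network="internal-net",     # assumed network name
        wait=True,
    )
    print(server.name, server.status)

Configuration management of the resulting nodes would then typically be handed over to a tool such as Puppet, as noted in the abstract.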
Overlay Opportunistic Clouds in CMS/ATLAS at CERN: The CMSooooooCloud in Detail – Jose Antonio Coarasa Perez
Overlay opportunistic clouds in CMS/ATLAS at CERN: The CMSooooooCloud in detail
The CMS and ATLAS online clusters consist of more than 3,000 computers each. They have been used exclusively for the data acquisition that led to the Higgs particle discovery, handling 100 GB/s data flows and archiving 20 TB of data per day.
An OpenStack cloud layer has been deployed on the newest part of the clusters (totalling 1,300 hypervisors and more than 13,000 cores in CMS alone) as a minimal overlay, so as to leave the primary role of the computers untouched while allowing opportunistic usage of the cluster.
This presentation will show how to share resources with minimal impact on the existing infrastructure. We will present the architectural choices made to deploy an unusual, as opposed to dedicated, "overlaid cloud infrastructure". These choices ensured a minimal impact on the running cluster configuration while giving maximal segregation of the overlaid virtual computing infrastructure. The use of Open vSwitch to avoid changes to the network infrastructure and to encapsulate the virtual machines' traffic will be illustrated, as well as the networking configuration adopted due to the nature of our private network. The design and performance of the OpenStack cloud control layer will be presented. We will also show the integration carried out to allow the cluster to be used opportunistically while giving full control to the CMS online run control.
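As a rough sketch of the Open vSwitch encapsulation idea described above (not the actual CMS configuration), the snippet below drives ovs-vsctl from Python to create a bridge and a VXLAN tunnel port, so virtual-machine traffic can be carried over the existing private network without touching the physical network setup; the bridge name, tunnel type and remote endpoint are illustrative assumptions.

import subprocess

def ovs(*args: str) -> None:
    # Run an ovs-vsctl command and fail loudly if it does not succeed.
    subprocess.run(["ovs-vsctl", *args], check=True)

# Bridge for the virtual machines' interfaces.
ovs("--may-exist", "add-br", "br-vms")
# VXLAN tunnel port so VM traffic is encapsulated towards a peer hypervisor;
# the remote address below is an illustrative placeholder.
ovs("--may-exist", "add-port", "br-vms", "vxlan0",
    "--", "set", "interface", "vxlan0",
    "type=vxlan", "options:remote_ip=192.0.2.10")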
With the HPC Cloud facility, SURFsara offers self-service, dynamically scalable and fully configurable HPC systems to the Dutch academic community. Users have, for example, a free choice of operating system and software.
The HPC Cloud offers full control over an HPC cluster, with fast CPUs and high-memory nodes, and it is possible to attach terabytes of local storage to a compute node. Because of this flexibility, users can fully tailor the system to a particular application. Long-running and small compute jobs are equally welcome. Additionally, the system facilitates collaboration: users can share control over their virtual private HPC cluster with other users and share processing time, data and results. A portal with a wiki, forums, repositories, an issue tracker, etc. is offered for collaboration projects as well.
How HPC and large-scale data analytics are transforming experimental science – inside-BigData.com
In this deck from DataTech19, Debbie Bard from NERSC presents: Supercomputing and the scientist: How HPC and large-scale data analytics are transforming experimental science.
"Debbie Bard leads the Data Science Engagement Group NERSC. NERSC is the mission supercomputing center for the USA Department of Energy, and supports over 7000 scientists and 700 projects with supercomputing needs. A native of the UK, her career spans research in particle physics, cosmology and computing on both sides of the Atlantic. She obtained her PhD at Edinburgh University, and has worked at Imperial College London as well as the Stanford Linear Accelerator Center (SLAC) in the USA, before joining the Data Department at NERSC, where she focuses on data-intensive computing and research, including supercomputing for experimental science and machine learning at scale."
Watch the video: https://wp.me/p3RLHQ-kLV
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Running a GPU burst for Multi-Messenger Astrophysics with IceCube across all ... – Igor Sfiligoi
The San Diego Supercomputer Center (SDSC) and the Wisconsin IceCube Particle Astrophysics Center (WIPAC) at the University of Wisconsin–Madison successfully completed a computational experiment as part of a multi-institution collaboration that marshalled all GPUs (graphics processing units) available for sale globally across Amazon Web Services, Microsoft Azure, and the Google Cloud Platform.
In all, some 51,500 GPU processors were used during the approximately 2-hour experiment conducted on November 16 and funded under a National Science Foundation EAGER grant.
The experiment – completed just prior to the opening of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC19) in Denver, CO – was coordinated by Frank Würthwein, SDSC Lead for High-Throughput Computing, and Benedikt Riedel, Computing Manager for the IceCube Neutrino Observatory and Global Computing Coordinator at WIPAC. Igor Sfiligoi, SDSC’s lead scientific software developer for high-throughput computing, and David Schultz, a production software manager with IceCube, conducted the actual run.
This presentation was given at several booths during SC19 by Frank Würthwein.
"Building and running the cloud GPU vacuum cleaner"Frank Wuerthwein
This talk, describing the "Largest Cloud Simulation in History" (Jensen Huang at SC19), was given at the MAGIC meeting on Dec. 4th 2019. MAGIC stands for "Middleware and Grid Interagency Cooperation", and is a group within NITRD. Current federal agencies that are members of MAGIC include DOC, DOD, DOE, HHS, NASA, and NSF.
Burst data retrieval after 50k GPU Cloud run – Igor Sfiligoi
We ran a 50k GPU multi-cloud simulation to support IceCube science. This talk provides an overview of what happened to the associated data.
Presented at the Internet2 booth at SC19.
NRP Engagement webinar - Running a 51k GPU multi-cloud burst for MMA with Ic... – Igor Sfiligoi
NRP Engagement webinar: description of the 380 PFLOP32s, 51k GPU multi-cloud burst using HTCondor to run the IceCube photon propagation simulation.
Presented January 27th, 2020.
Demonstrating a Pre-Exascale, Cost-Effective Multi-Cloud Environment for Scie... – Igor Sfiligoi
Presented at PEARC20.
This talk presents the expansion of IceCube's production HTCondor pool using cost-effective GPU instances in preemptible mode gathered from the three major cloud providers, namely Amazon Web Services, Microsoft Azure and the Google Cloud Platform. Using this setup, we sustained about 15k GPUs for a whole workday, corresponding to around 170 PFLOP32s, integrating over one EFLOP32 hour worth of science output for a price tag of about $60k. In the paper, we provide the reasoning behind the cloud instance selection, a description of the setup and an analysis of the provisioned resources, as well as a short description of the actual science output of the exercise.
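As a back-of-the-envelope check of the figures quoted above (the 8-hour "workday" is an assumption; the other numbers come from the abstract):

# Figures from the abstract above; the 8-hour "workday" is an assumption.
gpus = 15_000
sustained_pflop32 = 170      # sustained PFLOP32/s
hours = 8                    # assumed length of "a whole workday"
cost_usd = 60_000

eflop32_hours = sustained_pflop32 * hours / 1000   # ~1.36 EFLOP32 hours
gpu_hours = gpus * hours                            # 120,000 GPU hours
print(f"{eflop32_hours:.2f} EFLOP32 hours")
print(f"${cost_usd / gpu_hours:.2f} per GPU hour")
print(f"{sustained_pflop32 * 1000 / gpus:.1f} TFLOP32/s per GPU on average")

This gives roughly 1.4 EFLOP32 hours and about $0.50 per GPU hour, consistent with the "over one EFLOP32 hour ... for about $60k" statement.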
Tackling Tomorrow's Computing Challenges Today at CERN – inside-BigData.com
In this deck from ISC 2018, Physicist and CTO of CERN openlab, Dr. Maria Girone discusses the demands of capturing, storing, and processing the large volumes of data generated by the LHC experiments.
"CERN openlab is a unique public-private partnership between The European Organization for Nuclear Research (CERN) and some of the world`s leading ICT companies. It plays a leading role in helping CERN address the computing and storage challenges related to the Large Hadron Collider’s (LHC) upgrade program.
The LHC is the world's most powerful particle accelerator and is one of the largest and most complicated machines ever built. The LHC collides proton pairs 40 million times every second in each of four interaction points, where four particle detectors are hosted. This extremely high rate of collisions makes it possible to identify rare phenomena and is vital in helping physicists reach the requisite level of statistical certainty to declare new discoveries, such as the Higgs boson in 2012. Extracting a signal from this huge background of collisions is one of the most significant challenges faced by the high-energy physics (HEP) community."
Watch the video: https://wp.me/p3RLHQ-iSu
Learn more: http://information-technology.web.cern.ch/about/organisation/cern-openlab
and
https://www.isc-hpc.com/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Open Access in Spain has come a long way, with clear successes but also too many obstacles. At the risk of oversimplifying, in Spain we have moved from working out how to support Open Access technologically to facing problems that are less about infrastructure and more about institutional strategy and policy towards it. There are still many questions and doubts we face today.
What is, and what should be, the role of funding agencies? Is it easy for a researcher in Spain to comply with the Open Access requirements imposed on them? Is it possible to commit institutionally to Open Access while complying with the restrictions that the current intellectual property law places on access through networks? Is a win/win agreement between libraries and the big academic publishers feasible? What role should libraries play? Are we prepared to face the challenges of Open Data?
REBIUN, the CRUE sectoral commission for university libraries, positioned itself as a defender and promoter of Open Access as early as 2004, and since then has led numerous initiatives and projects along these lines. Its current strategic plan includes strategic objectives related to the dissemination and promotion of Open Access.
National scale research computing and beyond – PEARC panel 2017 – Gregory Newby
Panel at the PEARC 2017 event in New Orleans, July 11-13. Panelists were: Gregory Newby, Chief Technology Officer, Compute Canada; Florian Berberich, Member of the Board of Directors PRACE aisbl; Gergely Sipos, Customer and Technical Outreach Manager, EGI Foundation; and John Towns, Director of Collaborative eScience Programs, National Center for Supercomputing Applications.
Panel abstract: How might the international community of research computing users and stakeholders benefit from knowledge sharing among national- or international-scale research computing organizations and providers? It is common for large-scale investments in research computing systems, services and support to be guided and funded with government oversight and centralized planning. There are many commonalities, including stakeholder relations, outcomes reporting, long-range strategic planning, and governance. What trends exist currently, and how might information sharing and collaboration among resource providers be beneficial? Is there desire to form a partnership, or to build upon existing relationships? Participants in this panel will include personnel involved in US, Canadian and European research computing jurisdictions.
Coupling Australia’s Researchers to the Global Innovation Economy – Larry Smarr
08.10.13
Sixth Lecture in the
Australian American Leadership Dialogue Scholar Tour
University of Technology Sydney
Title: Coupling Australia’s Researchers to the Global Innovation Economy
Sydney, Australia
Coupling Australia’s Researchers to the Global Innovation Economy – Larry Smarr
08.10.15
Eighth Lecture in the
Australian American Leadership Dialogue Scholar Tour
Australian National University
Title: Coupling Australia’s Researchers to the Global Innovation Economy
Canberra, Australia
The Singularity: Toward a Post-Human Reality – Larry Smarr
06.02.13
Talk to UCSD's Sixth College
Honors Course on Kurzweil's The Singularity is Near
Title: The Singularity: Toward a Post-Human Reality
La Jolla, CA
Horizon Europe Quantum Webinar - Cluster 4 Destinations 4 and 5 | Pitches – KTN
KTN Global Alliance, in partnership with the Foreign, Commonwealth and Development Office (FCDO) in Germany, the UK Science and Innovation Network, UK National Contact Points (NCPs) from Innovate UK and European NCPs, focused on pitching project ideas and brokering partnerships for European research and innovation collaborations and networking.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... – University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Nutraceutical market, scope and growth: Herbal drug technology – Lokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market, which includes products such as functional foods, drinks, and dietary supplements that provide health benefits beyond basic nutrition, is growing significantly. Rising healthcare costs, an ageing population, and increasing demand for natural and preventative health solutions are driving rapid expansion. Innovations in product formulation and the use of cutting-edge technology for personalised nutrition further fuel market growth. With its worldwide reach, the nutraceutical industry is expected to keep growing and to offer significant opportunities for research and investment across a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
Seminar on U.V. Spectroscopy – Samir Panda
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that measures the amount of light absorbed by the analyte.
Slide 1: Title Slide
Extrachromosomal Inheritance
Slide 2: Introduction to Extrachromosomal Inheritance
Definition: Extrachromosomal inheritance refers to the transmission of genetic material that is not found within the nucleus.
Key Components: Involves genes located in mitochondria, chloroplasts, and plasmids.
Slide 3: Mitochondrial Inheritance
Mitochondria: Organelles responsible for energy production.
Mitochondrial DNA (mtDNA): Circular DNA molecule found in mitochondria.
Inheritance Pattern: Maternally inherited, meaning it is passed from mothers to all their offspring.
Diseases: Examples include Leber’s hereditary optic neuropathy (LHON) and mitochondrial myopathy.
Slide 4: Chloroplast Inheritance
Chloroplasts: Organelles responsible for photosynthesis in plants.
Chloroplast DNA (cpDNA): Circular DNA molecule found in chloroplasts.
Inheritance Pattern: Often maternally inherited in most plants, but can vary in some species.
Examples: Variegation in plants, where leaf color patterns are determined by chloroplast DNA.
Slide 5: Plasmid Inheritance
Plasmids: Small, circular DNA molecules found in bacteria and some eukaryotes.
Features: Can carry antibiotic resistance genes and can be transferred between cells through processes like conjugation.
Significance: Important in biotechnology for gene cloning and genetic engineering.
Slide 6: Mechanisms of Extrachromosomal Inheritance
Non-Mendelian Patterns: Do not follow Mendel’s laws of inheritance.
Cytoplasmic Segregation: During cell division, organelles like mitochondria and chloroplasts are randomly distributed to daughter cells.
Heteroplasmy: Presence of more than one type of organellar genome within a cell, leading to variation in expression.
Slide 7: Examples of Extrachromosomal Inheritance
Four O’clock Plant (Mirabilis jalapa): Shows variegated leaves due to different cpDNA in leaf cells.
Petite Mutants in Yeast: Result from mutations in mitochondrial DNA affecting respiration.
Slide 8: Importance of Extrachromosomal Inheritance
Evolution: Provides insight into the evolution of eukaryotic cells.
Medicine: Understanding mitochondrial inheritance helps in diagnosing and treating mitochondrial diseases.
Agriculture: Chloroplast inheritance can be used in plant breeding and genetic modification.
Slide 9: Recent Research and Advances
Gene Editing: Techniques like CRISPR-Cas9 are being used to edit mitochondrial and chloroplast DNA.
Therapies: Development of mitochondrial replacement therapy (MRT) for preventing mitochondrial diseases.
Slide 10: Conclusion
Summary: Extrachromosomal inheritance involves the transmission of genetic material outside the nucleus and plays a crucial role in genetics, medicine, and biotechnology.
Future Directions: Continued research and technological advancements hold promise for new treatments and applications.
Slide 11: Questions and Discussion
Invite Audience: Open the floor for any questions or further discussion on the topic.
(May 29th, 2024) Advancements in Intravital Microscopy - Insights for Preclini... – Scintica Instrumentation
Intravital microscopy (IVM) is a powerful tool utilized to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been accomplished using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed-tissue imaging, IVM allows ultra-fast, high-resolution imaging of cellular processes over time and space, studied in their natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provides insights into the progression of disease, response to treatments, or developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM Technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system's unique features and user-friendly software enable researchers to probe fast, dynamic biological processes such as immune cell tracking, cell-cell interaction, as well as vascularization and tumor metastasis in exceptional detail. This webinar will also give an overview of IVM being utilized in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo and allowing for the evaluation of therapeutic intervention in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol... – Studia Poinsotiana
I Introduction
II Subalternation and Theology
III Theology and Dogmatic Declarations
IV The Mixed Principles of Theology
V Virtual Revelation: The Unity of Theology
VI Theology as a Natural Science
VII Theology’s Certitude
VIII Conclusion
Notes
Bibliography
All the contents are fully attributable to the author, Doctor Victor Salas. Should you wish to get this text republished, get in touch with the author or the editorial committee of the Studia Poinsotiana. Insofar as possible, we will be happy to broker your contact.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... – Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Richard's adventures in two entangled wonderlands – Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
2. Keeping up with big data science
Experiences and outlook for the CERN LHC computing
Tim Bell, CERN IT
@noggin143
Register Lectures, 14th March 2019
3. About Tim
• Responsible for Compute and Monitoring in the CERN IT department
• Previously worked for IBM and Deutsche Bank
4. The Mission of CERN
• Push back the frontiers of knowledge, e.g. the secrets of the Big Bang: what was the matter like within the first moments of the Universe’s existence?
• Develop new technologies for accelerators and detectors: information technology (the Web and the Grid), medicine (diagnosis and therapy)
• Train the scientists and engineers of tomorrow
• Unite people from different countries and cultures
5. CERN: founded in 1954 by 12 European States, “Science for Peace”
Today: 22 Member States
Member States: Austria, Belgium, Bulgaria, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, Netherlands, Norway, Poland, Portugal, Romania, Slovak Republic, Spain, Sweden, Switzerland and United Kingdom
Associate Members in the Pre-Stage to Membership: Cyprus, Serbia, Slovenia
Associate Member States: India, Lithuania, Pakistan, Turkey, Ukraine
Applications for Membership or Associate Membership: Brazil, Croatia, Estonia
Observers to Council: Japan, Russia, United States of America; European Union, JINR and UNESCO
~2,600 staff, ~1,800 other paid personnel, ~13,000 scientific users
Budget (2018): ~1,150 MCHF
6. Science is getting more and more global
CERN: 235 staff, 55 fellows, 7 doctoral + 3 technical students
10. Discovery 2012, Nobel Prize in Physics 2013
The Nobel Prize in Physics 2013 was awarded jointly to François Englert and Peter W. Higgs "for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN's Large Hadron Collider".
11. 12th March 1989, 30 years ago
“Vague but interesting”
Or Archie? Gopher?
https://web30.web.cern.ch/
https://www.youtube.com/watch?v=A1L2xODZSI4
12. Medical Application as an Example of Particle Physics Spin-off
• Accelerating particle beams: ~30,000 accelerators worldwide, ~17,000 used for medicine
• Hadron therapy: leadership in ion beam therapy now in Europe and Japan; >100,000 patients treated worldwide (45 facilities), >50,000 patients treated in Europe (14 facilities)
[Figure: tumour target reached by protons / light ions compared with X-rays]
• Detecting particles: imaging, PET scanners; clinical trial in Portugal, France and Italy for a new breast imaging system (ClearPEM)
13. Data Analysis at the LHC
The process to transform raw data into useful physics datasets is a complicated series of steps at the LHC (Run 2):
• Stages: HLT, reconstruction, reprocessing, organized analysis, final selection
• Processing: roughly 50k, 80k, 20k and 40k cores at successive stages
• People: DAQ and trigger (less than 200), operations (less than 100), analysis users (more than 1,000)
• Data volume: from detector (~1 PB/s), after hardware trigger (TB/s), selected RAW (~1 GB/s), derived data (~2 GB/s), analysis selection (~100 MB/s)
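Read as successive reduction factors, the approximate rates above imply the following order-of-magnitude sketch (the numbers are the rounded ones from the slide):

# Approximate rates from the slide above, in bytes per second.
rates = {
    "from detector":          1e15,  # ~1 PB/s
    "after hardware trigger": 1e12,  # ~TB/s
    "selected RAW":           1e9,   # ~1 GB/s
    "analysis selection":     1e8,   # ~100 MB/s
}
stages = list(rates.items())
for (prev, r_prev), (cur, r_cur) in zip(stages, stages[1:]):
    print(f"{prev} -> {cur}: ~{r_prev / r_cur:,.0f}x reduction")
# Overall the chain keeps roughly one part in ten million of the raw detector rate.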
14. The Worldwide LHC Computing Grid (WLCG)
An international collaboration to distribute and analyse LHC data. It integrates computer centres worldwide that provide computing and storage resources into a single infrastructure accessible by all LHC physicists.
• Tier-0 (CERN and Hungary): data recording, reconstruction and distribution
• Tier-1: permanent storage, re-processing, analysis
• Tier-2: simulation, end-user analysis
• >2 million jobs/day, ~1M CPU cores, ~1 EB of storage, ~170 sites in 42 countries, 10-100 Gb links
15. Big Science – Big Data
• 40 million pictures per second in one experiment, of which about 1,000 are recorded
• Worldwide LHC Computing Grid: 800 PB of storage, >170 sites in 42 countries
16. 2018 was quite a year – Storage
(Ian Bird, LHCC, 26 Feb 2019)
• 88 PB recorded in 2018 (including parked b-physics data): ATLAS 24.7, CMS 43.6, LHCb 7.3, ALICE 12.4
• CERN Tape Store: 330 PB archived
[Figure panels: data transfers, heavy-ion run]
18. Worldwide networking
[Map: LHCONE L3VPN, a global infrastructure for High Energy Physics data analysis (LHC, Belle II, Pierre Auger Observatory, NOvA, XENON), connecting Tier 0/1/2/3 sites via national research and education networks and exchange points across all continents. See http://lhcone.net for detail. Ver. 4.2, May 29, 2018 – W. E. Johnston, ESnet, wej@es.net. Notes: (1) LHCOPN paths are not shown on this diagram; (2) the "LHCONE peerings" at the exchange points indicate who has a presence there, not that all peer with each other (see https://twiki.cern.ch/twiki/bin/view/LHCONE/LhcOneVRF). Communication links: 1/10, 20/30/40, and 100 Gb/s.]
19. LHC Schedule
[Timeline: Run 3 brings the ALICE and LHCb upgrades; Run 4 brings the ATLAS and CMS upgrades.]
20. CERN Infrastructure Transitions
• Pre-LHC (up to 2009): mainframes (80s) to Unix (90s) to Linux (00s); EU-funded developments such as Quattor and Lemon
• Long Shutdown 1 (2013-2015): move to an open source, cloud-based infrastructure; community tools such as OpenStack, Puppet and Grafana
• Long Shutdown 2 (2019-2021)?: add containerisation with Kubernetes and Terraform
22. Open Source Communities
• Good cultural fit with CERN: meritocracy, sharing with other labs, giving back to society
• Matches staffing models: contract lengths, attracts skills, peer recognition, career opportunities
• Need to support growth: contributions back, scale testing, dojos (e.g. OpenStack, CentOS, Ceph), evangelise, input for governance
24. CERN Open Data Portal
Publicly-accessible site for curated releases of CERN data sets and software: http://opendata.cern.ch
Releases cover the LHC and more; for example, CMS released 300 TB in 2016 and ~1 PB in 2017.
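Purely as an illustration: the portal is built on the Invenio framework, so individual records can typically be retrieved over a simple REST endpoint; the /api/records/<id> path and the record id in this sketch are assumptions that should be checked against the portal's own documentation.

import json
import urllib.request

record_id = 1   # hypothetical record id
url = f"http://opendata.cern.ch/api/records/{record_id}"   # assumed Invenio-style path

with urllib.request.urlopen(url) as resp:
    record = json.load(resp)
print(record.get("metadata", {}).get("title", "no title field found"))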
25. LHC Schedule
[Timeline repeated from slide 19: Run 3 brings the ALICE and LHCb upgrades; Run 4 brings the ATLAS and CMS upgrades.]
26. Events at HL-LHC
• Increased complexity due to much higher pile-up and higher trigger rates will bring several challenges to reconstruction algorithms
• CMS had to cope with monster pile-up: the 8b4e bunch structure gave a pile-up of ~60 events/crossing (compared with ~20 events/crossing)
[Figures: CMS event from 2017 with 78 reconstructed vertices; ATLAS simulation for HL-LHC with 200 vertices]
27. HL-LHC computing cost parameters
• Parameters: business of the experiments (amount of raw data, thresholds); detector design has long-term computing cost implications
• Core algorithms: business of the experiments (reconstruction and simulation algorithms)
• Software performance: performance, architectures, memory etc.; tools to support automated build/validation; collaboration with externals via HSF
• Infrastructure: new grid/cloud models; optimise CPU/disk/network; economies of scale via clouds, joint procurements etc.
28. The HL-LHC computing challenge
• HL-LHC needs for ATLAS and CMS are above the expected hardware technology evolution (15% to 20%/yr) and funding (flat)
• The main challenge is storage, but computing requirements grow 20-50x
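A simple piece of arithmetic shows the size of the gap: with flat funding, a 15-20% yearly technology gain compounds to only a few times more capacity over the run-up to HL-LHC, against a 20-50x growth in requirements (the 7-year horizon below is an illustrative assumption).

# Capacity from flat funding plus technology gains vs. quoted requirements growth.
# The 7-year horizon is an illustrative assumption.
years = 7
for annual_gain in (0.15, 0.20):
    capacity_factor = (1 + annual_gain) ** years
    print(f"{annual_gain:.0%}/yr over {years} years -> ~{capacity_factor:.1f}x capacity")
print("Requirements growth quoted above: roughly 20x to 50x")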
30. Data flow challenges
[SKA data flow: from the desert telescope site to Perth / Cape Town and on to world users; the Science Data Processor (SDP) provides a 10-50x data rate reduction.]
31. Yearly data volumes
• LHC – 2016: 50 PB raw data
• LHC science data: ~200 PB
• HL-LHC – 2026: ~600 PB raw data, ~1 EB physics data
• SKA Phase 1 – 2023: ~300 PB/year science data
• SKA Phase 2 – mid-2020s: ~1 EB science data
• For comparison: Google searches 98 PB, Facebook uploads 180 PB, Google Internet archive ~15 EB
32. Medical Data Deluge
• "150 EBytes of medical data in the US, growing 48% annually" [1]
• Cost of instruments and laboratory equipment decreasing fast (e.g. sub-$1k genomic sequencers)
• Medical and fitness wearable devices on the rise; data produced in 2020 projected at 335 PB/month [2]
• Data sources: wearable devices, instruments, images, publications / EHR / notes, clinical trials, simulations
[1] Esteva A. et al., A Guide to Deep Learning in Healthcare, Nature Medicine, Vol. 25, Jan 2019, 24-29
[2] https://www.statista.com/statistics/292837/global-wearable-device-mobile-data-traffic/
34. The “data lake” concept
• The idea is to localise bulk data in a cloud service (the Tier 1s’ data lake): minimise replication, assure availability
• Serve data to remote (or local) compute: grid, cloud, HPC, ...
• Simple caching is all that is needed at the compute site
• Works at national, regional and global scales
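A minimal sketch of the "simple caching at the compute site" idea, assuming a plain HTTP-accessible data-lake endpoint and a local scratch directory (both hypothetical placeholders):

import pathlib
import urllib.request

DATA_LAKE = "https://datalake.example.org"       # hypothetical bulk-storage endpoint
CACHE_DIR = pathlib.Path("/scratch/datacache")   # hypothetical local cache

def fetch(dataset_path: str) -> pathlib.Path:
    """Return a local copy of a file, downloading it only on a cache miss."""
    local = CACHE_DIR / dataset_path
    if not local.exists():
        local.parent.mkdir(parents=True, exist_ok=True)
        urllib.request.urlretrieve(f"{DATA_LAKE}/{dataset_path}", local)
    return local

# A job would then open fetch("run2018/events_001.root") as if it were a local file.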
37. Using Supercomputers?
• Can we use supercomputers in the various national laboratories for LHC computing?
• Large-scale supercomputer resources optimised for tightly coupled computing are being used for more HEP applications
• Many of the techniques needed to burst jobs to clouds and handle distributed storage are the same ones needed to burst to high scale on centralised HPC resources
• HPC resources have many cores but generally less memory per core
• Applications have been modified to be better suited to HPC: smaller memory footprints, more use of parallel algorithms, modifications to I/O
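One generic way to picture the "smaller memory footprint" point above is to share large read-only data between worker processes instead of duplicating it per core; the sketch below illustrates the general technique with the standard library only and is not the experiments' actual software.

import multiprocessing as mp

GEOMETRY = None  # stand-in for a large detector-geometry / calibration table

def load_constants() -> None:
    global GEOMETRY
    GEOMETRY = list(range(10_000_000))   # stand-in for hundreds of MB of constants

def process_event(event_id: int) -> int:
    # Forked workers read GEOMETRY without copying it (copy-on-write sharing).
    return GEOMETRY[event_id % len(GEOMETRY)]

if __name__ == "__main__":
    load_constants()   # loaded once in the parent, before the workers fork
    with mp.get_context("fork").Pool(processes=4) as pool:
        print(sum(pool.map(process_event, range(1000))))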
39. New methods
• Data acquisition: real-time event categorisation; data monitoring & certification; fast inference for trigger systems
• Data reconstruction: calorimeter reconstruction; boosted-object jet tagging
• Data processing: computing resource optimisation; predicting data popularity; intelligent networking
• Data simulation: adversarial networks; fast simulation
• Data analysis: knowledge base; data reduction; searches for new physics
• Future detectors will be 3D arrays of sensors with regular geometry
• It would be ideal to quickly reconstruct particles directly from the image (which is what Deep Learning became famous for)
Particle reconstruction as image detection
Deep Learning for Imaging Calorimetry – Vitoria Barin Pacela, Jean-Roch Vlimant, Maurizio Pierini, and Maria Spiropulu (California Institute of Technology and CMS)
"We investigate particle reconstruction using Deep Learning, based on a dataset consisting of single-particle energy showers in a highly-granular Linear Collider Detector calorimeter with a regular 3D array of cells. We perform energy regression on photons, electrons, neutral and charged pions, and discuss the performance of our model in each particle dataset."
I. INTRODUCTION: One of the greatest challenges at the LHC at CERN is to collect and analyse data efficiently. Sophisticated machine learning methods have been researched to tackle this problem, such as boosted decision trees and deep learning. In this project, we are using deep neural networks (DNN) [1] [2] to recognize images originated by the collisions in the Linear Collider Detector (LCD) calorimeter [3] [4], designed to operate at the Compact Linear Collider (CLIC). Preliminary studies have explored the possibility of reconstructing particles from calorimetric deposits using image recognition techniques based on convolutional neural networks, using a dataset of simulated hits of individual particles on the LCD surface. The dataset consists of calorimetric showers produced by single particles (pions, electrons or photons) hitting the surface of an electromagnetic calorime...
FIG. 1. Visualization of the data. Charged pion event displayed in the ECAL and HCAL. Every hit is shown in its respective cell in each of the calorimeters. Warmer colors (like orange and pink) represent higher energies, as 420 GeV, whereas colder colors, like blue, represent lower energies, as 50 GeV. [5]
II. METHODS: The datasets were simulated as close as pos...
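In the spirit of the study excerpted above, a minimal Keras sketch of energy regression on a 3D calorimeter image could look like the following; the input shape, layer sizes and the random training data are placeholders and do not reproduce the paper's actual model.

import numpy as np
import tensorflow as tf

# Toy model: regress the particle energy from a 3D grid of calorimeter cells.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(25, 25, 25, 1)),                # assumed cell grid
    tf.keras.layers.Conv3D(16, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling3D(pool_size=2),
    tf.keras.layers.Conv3D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                              # regressed energy (GeV)
])
model.compile(optimizer="adam", loss="mse")

# Random stand-in data: 32 fake showers and fake target energies.
x = np.random.rand(32, 25, 25, 25, 1).astype("float32")
y = np.random.uniform(10, 500, size=(32, 1)).astype("float32")
model.fit(x, y, epochs=1, batch_size=8, verbose=0)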
40. CERN openlab
Evaluate state-of-the-art technologies in collaboration with companies to address CERN's extreme computing challenges.
41. Future of particle physics?
High Luminosity LHC until 2035:
• Ten times more collisions than the original design
Studies in progress:
• Compact Linear Collider (CLIC): up to 50 km long; linear e+e- collider with √s up to 3 TeV
• Future Circular Collider (FCC): ~100 km circumference; new-technology magnets; 100 TeV pp collisions in a 100 km ring; an e+e- collider (FCC-ee) as a first step?
• European Strategy for Particle Physics: preparing the next update in 2020
43. Summary
• CERN's physics programme will challenge storage, networking and compute technology
• Collaborations with industry, open source, open data and outreach drive CERN's missions, with significant benefits outside High Energy Physics and research
Further information at:
• http://home.cern
• http://techblog.web.cern.ch
• http://lhcathome.web.cern.ch/
• http://opendata.cern.ch/
46. ESFRI Science Projects
ESFRI science projects and partners: HL-LHC, SKA, FAIR, CTA, KM3Net, JIVE-ERIC, ELT, EST, EURO-VO, EGO-VIRGO, (LSST), (CERN, ESO)
Goals:
• Prototype an infrastructure for the European Open Science Cloud that is adapted to the exabyte-scale needs of the large ESFRI science projects
• Ensure that the science communities drive the development of the EOSC
• Address FAIR data management, long-term preservation, open access, open science, and contribute to the EOSC catalogue of services
Projects:
• HL-LHC
• Square Kilometer Array (SKA)
• Facility for Antiproton and Ion Research (FAIR)
• Cubic Kilometre Neutrino Telescope (KM3NET)
• Cherenkov Telescope Array (CTA)
• Extremely Large Telescope (ELT)
• European Solar Telescope (EST)
• European Gravitational Observatory (EGO)
Editor's Notes
Takes weeks and involves a big central operation team and large user community. Data is touched several times and by different sets of teams
90% of compute resources are now allocated on the cloud
ESCAPE (European Science Cluster of Astronomy & Particle physics ESFRI research infrastructures) aims to address the Open Science challenges shared by ESFRI facilities (CTA, ELT, EST, FAIR, HL-LHC, KM3NeT, SKA) as well as other pan-European research infrastructures (CERN, ESO, JIV-ERIC, EGO-Virgo) in astronomy and particle physics research domains.
ESFRI https://www.esfri.eu/
FAIR is at GSI, Darmstadt, Germany
KM3NeT (http://www.km3net.org/) is in the Mediterranean
ELT –