This document summarizes efforts to model stellar explosions using astrophysics simulation codes such as Maestro, Castro, and BoxLib. It discusses the multiscale challenges of modeling convection, turbulence, nuclear burning, and rotation in stars, as well as the temporal challenges of modeling processes on timescales ranging from millions of years down to seconds. The document outlines the types of approximations commonly used and the diversity of codes in the field, provides details on the hydrodynamic schemes, grids, and divergence constraints used in different codes, and summarizes results on modeling Type Ia supernovae, helium convection in sub-Chandrasekhar-mass white dwarfs, white dwarf mergers, and X-ray bursts.
1. Modeling Stellar Explosions with Maestro, Castro, and the BoxLib Astrophysics Suite
Michael Zingale (Stony Brook University)
in collaboration with
Ann Almgren, John Bell, Andy Nonaka, Weiqun Zhang (LBNL), Chris Malone (LANL), Adam Jacobs (MSU), Max Katz (Nvidia/Stony Brook), Maria Barrios Sazo, Alan Calder, Doug Swesty, Don Willcox (Stony Brook), Stan Woosley (UCSC)
Support from DOE Office of Nuclear Physics, NSF Astronomy & Astrophysics. Computer time via DOE INCITE @ OLCF/ORNL and NERSC/LBNL.
2. Multiscale Challenges
● Nature is 3-d
– Convection driven by nuclear energy release
– Fluid instabilities / turbulence
– Localized burning / runaway
– Rotation
● Range of lengthscales can be enormous
● Solutions (?)
– Adaptive mesh refinement
– Subgrid scale models
3. Temporal Challenges
● Many astrophysical explosions exhibit a range of relevant timescales
– Stellar evolution up to the point of explosion / remnant formation ~ millions to 10s of billions of years
– Simmering convective phase ~ millennia to days/hours
– Explosion ~ seconds to hours
– Radiation transport ~ weeks to months
● No single algorithm can model a star from start to finish
4. Common Astrophysical Approximations
● 1-d stellar evolution codes
– Still the workhorse for understanding stellar evolution and the stages leading up to explosion
– Parameter-rich (what does rotation, convection, or turbulence mean in 1-d?)
● Low speed hydrodynamics approximations
– Developed initially for atmospheric flows
● Compressible (magneto-)hydrodynamics
– Viscous scales are usually not resolvable
● (Multigroup) flux-limited diffusion or M1 radiation hydrodynamics
– Full multi-angle discretization of radiation can be prohibitively expensive
5. The diversity of codes used in 3-d stellar hydrodynamics is a strength of our community
6. Hydrodynamic Schemes
Full Implicit Hydro:
● Can capture all compressibility effects
● Timestep acts as a filter
● Valid for all Mach #s
● Requires solution of a coupled nonlinear system
Low Mach Hydro:
● Analytically filters soundwaves from the system
● Single linear implicit equation to solve
● HSE analytically enforced
● Only some compressibility effects retained
● Validity breaks down when dynamic pressure becomes large (M ~ 0.2 – 0.3)
Explicit Hydro:
● Can capture all compressibility effects
● Timestep is limited by the sound-crossing time
● All nearest-neighbor communication
● HSE and low Mach flows can be challenging
● Long timescale evolution not possible
● Implicit and low Mach schemes can take steps at the advection timescale (see the sketch below)
– Usually larger than 1/M, since the peak sound speed is at the center but the highest Mach number is usually at the stellar surface
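To make the timestep contrast concrete, here is a minimal Python sketch (illustrative only, not taken from any of the codes above; the toy grid, velocity, and sound-speed arrays are made up) comparing the sound-crossing CFL limit of an explicit scheme with the advective limit available to implicit and low Mach schemes:

    import numpy as np

    def explicit_dt(dx, u, cs, cfl=0.8):
        # compressible CFL: limited by the fastest signal, |u| + cs, in any zone
        return cfl * np.min(dx / (np.abs(u) + cs))

    def advective_dt(dx, u, cfl=0.8):
        # implicit / low Mach CFL: limited only by the bulk fluid speed
        return cfl * np.min(dx / np.maximum(np.abs(u), 1.e-30))

    # toy stratified "star": slow convective flow, sound speed largest at depth
    dx = 1.0e5 * np.ones(512)                    # zone width [cm]
    u = 1.0e5 * np.random.uniform(-1, 1, 512)    # bulk velocity [cm/s]
    cs = np.linspace(1.0e8, 1.0e7, 512)          # sound speed [cm/s]

    print(advective_dt(dx, u) / explicit_dt(dx, u, cs))

The printed ratio is the per-step gain referred to above; because the largest sound speed (at the center) and the largest Mach number (near the surface) occur in different zones, the gain is usually larger than 1/M.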
7. Grids
Spherical:
● Stars are spherical
● Outer boundary is easier
● Cells have varying resolution
● Cannot model flow through the center
Cartesian:
● No central coordinate singularity
● Uniform resolution throughout
● Grid effects from modeling spheres
● Outer boundary can be challenging
Mapped:
● Can get spherical outer boundary and Cartesian inner boundary
● Rotation?
● Linear solvers can have trouble with irregular cells
● Adaptive mesh refinement can be used with any of these grid types
● Very nice work is also being done with spectral codes, e.g., dedalus and various anelastic codes
8. Low Speed Divergence Constraints
● Incompressible: density of a fluid element doesn't change as it is advected
● Low Mach combustion: heat release in a fluid element is a source of divergence; the fluid element expands
● Anelastic: a fluid element adiabatically expands as it buoyantly rises
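The velocity constraints corresponding to these three regimes were shown as images on the original slide; in the standard notation of the low Mach number literature they read:

    \begin{align*}
    \nabla \cdot \mathbf{U} &= 0 && \text{incompressible} \\
    \nabla \cdot \mathbf{U} &= S && \text{low Mach combustion ($S$ = local heat release)} \\
    \nabla \cdot (\rho_0 \mathbf{U}) &= 0 && \text{anelastic ($\rho_0$ = base-state density)}
    \end{align*}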
9. Maestro: Low Mach Hydro
● Reformulation of the compressible Euler equations
– Retain compressibility effects due to heating and stratification
– Asymptotic expansion in Mach number decomposes pressure into thermodynamic and dynamic parts
– Hydrostatic equilibrium analytically enforced
● Elliptic constraint on the velocity field (written out below):
– β₀ is a density-like variable
– S represents heating sources
● Self-consistent evolution of the base state
● Timestep based on bulk fluid velocity, not the sound speed
● Brings ideas from the atmospheric, combustion, and applied math communities to nuclear astrophysics
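For reference, the two relations referred to above (displayed as images in the original slide) take the following form in the Maestro papers:

    \begin{align*}
    \nabla p_0 &= -\rho_0\, g\, \hat{e}_r && \text{hydrostatic equilibrium of the base state} \\
    \nabla \cdot (\beta_0 \mathbf{U}) &= \beta_0 S && \text{elliptic constraint on the velocity field}
    \end{align*}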
11. Scaling
● Good strong scaling behavior
● Developments in progress:
– Tiling approach to OpenMP
– GPUs for microphysics
12. Castro
● Castro is the fully compressible counterpart to Maestro
– 1-, 2-, and 3-dimensional unsplit, 2nd-order hydrodynamics
– Multigroup flux-limited diffusion radiation hydrodynamics, including terms to O(v/c)
– Adaptive mesh refinement with subcycling in time; jumps of 2x and 4x between levels
– Arbitrary equation of state
– General nuclear reaction networks
– Explicit thermal diffusion
– Full Poisson gravity (with isolated boundary conditions), conservative flux formulation
– Rotation (in the co-rotating frame) in 2-d axisymmetric and 3-d
● Ability to restart from a Maestro calculation to bring it into the compressible regime
Refs: Almgren et al. 2010; Zhang et al. 2011; Zhang et al. 2013
13. AMReX
● AMReX (formerly BoxLib) handles the grid management and parallel distribution
– C++/Fortran or pure Fortran library
● Hybrid parallelism model based on MPI and OpenMP
– Tiling to improve manycore performance
● Efficient cell-centered and node-centered geometric multigrid solvers
● Nightly regression tests of all the codes
● Sidecar: a separate processor group can do runtime analysis while the simulation is running
● Extensive performance and memory profiling tools built in
● Highly portable
14. StarKiller
● New github organization: StarKiller-astro
– https://github.com/starkiller-astro/
– Holds common microphysics for our codes
● Collaborators at OLCF will write interfaces for other codes (including Chimera and Flash)
● The hope is to have a community-focused repo for microphysics
● Some work on offloading to GPUs has been done; more is underway
– e.g., an OpenACC port of helmeos was 4x faster on the GPU than OpenMP on the cores of Titan
– OLCF did an OpenMP device-offloading version
15. Tiling
● AMReX implements hybrid MPI + OpenMP parallelism
● Logical tiling (see the sketch after this slide)
– Move the OpenMP pragmas out of the low-level code and divide the AMR blocks into 3-d tiles at a high level
– OpenMP threads can now work on the same AMR patch or across patches simultaneously
– Experiments at LBNL show great scaling on 80+ cores (see Zhang et al. 2015, submitted)
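A minimal Python sketch of the logical-tiling idea (illustrative only; AMReX does this in C++/Fortran, and the box and tile sizes here are arbitrary): each AMR box is split into smaller tiles, and the thread-level loop runs over tiles rather than over whole boxes, so many threads can share one patch.

    import numpy as np
    from itertools import product

    def tiles(box_shape, tile_size):
        """Yield slice tuples covering a 3-d box in logical tiles."""
        starts = [range(0, n, t) for n, t in zip(box_shape, tile_size)]
        for i0, j0, k0 in product(*starts):
            yield (slice(i0, i0 + tile_size[0]),
                   slice(j0, j0 + tile_size[1]),
                   slice(k0, k0 + tile_size[2]))

    # one 64^3 AMR box updated tile-by-tile; in a real code each tile is a
    # unit of work handed to an OpenMP thread
    data = np.zeros((64, 64, 64))
    for t in tiles(data.shape, (16, 16, 64)):
        data[t] += 1.0        # stand-in for the per-zone physics update

    assert np.all(data == 1.0)   # every zone touched exactly once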
16. Open Science
● Every line of code needed to rerun the simulations shown (SN Ia convection, sub-Ch convection, WD mergers, & XRB) is in our public github repos
– http://github.com/AMReX-Astro
– Including input files, analysis scripts, submission scripts, etc.
– Repos: MAESTRO, Castro, BoxLib, Microphysics (common astrophysical routines), wdmerger
– These are our actual development repos
● All output files store the git hashes of the source, the machine name, compiler versions and flags, the values of all runtime parameters, ... (a sketch of this kind of provenance capture follows this slide)
– Most papers include the github hash of the repos used for the simulations
– Reproducibility is part of the scientific method
● Consider making your simulation codes + infrastructure open source
– Perhaps counter-intuitively, the person who benefits the most is you, not your “competitors”
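As one concrete (and hypothetical; it is not taken from the codes above) way to capture this kind of provenance, a run script can record the git hash and machine information and write it next to the simulation output:

    import json
    import platform
    import subprocess
    from datetime import datetime, timezone

    def provenance(repo_dir="."):
        # hash of the exact source used for this run
        githash = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], cwd=repo_dir).decode().strip()
        return {
            "git_hash": githash,
            "machine": platform.node(),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }

    # stored alongside the plotfiles so the run can be tied back to the source
    with open("job_info.json", "w") as f:
        json.dump(provenance(), f, indent=2)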
17. Type Ia Supernovae
● No H; strong Si, Ca, Fe lines
● Occur in old populations
● As bright as the host galaxy, L ~ 10^43 erg s^-1
● 56Ni powers the lightcurve
● Act as standard candles
● General consensus: thermonuclear explosion of a carbon/oxygen white dwarf
– What progenitor?
[Images: SN 1994D (High-Z SN Search Team); David A. Hardy & PPARC]
18. Variations in SNe Ia
● Chandra model:
– Massive WD accretes from a companion up to the Chandrasekhar mass; simmering of C begins at the center, and ignition of a burning front consumes the star
– Does nature make massive WDs?
– Does the burning front remain subsonic?
● Double degenerates:
– Two WDs inspiral; explosion is either prompt or follows (long term?) accretion
– Can we avoid accretion-induced collapse?
– Can we get an explosion that looks like a SN Ia?
● Sub-Chandra model:
– Double detonation: ignite in the He layer on the surface of the WD; the shock converges at the center of the underlying C/O WD and detonates it from the inside out
– Can we hide the He?
– Can we make normal SNe Ia?
● What gives the variation in the peak brightness of events?
19. Convection Preceding the Chandra Model
● Explosion in the Chandra model for SN Ia is preceded by centuries of simmering / convection
– Sets the explosion initial conditions
● Dipole feature seen in previous calculations is better described as a jet
– Asymmetry in the radial velocity field
● Direction changes rapidly
[Figure: radial velocity field (red = outflow; blue = inflow) in an 1152^3 non-rotating WD simulation]
Refs: Zingale et al. 2009; Zingale et al. 2011; Nonaka et al. 2012
20. On To Explosion...
● Mach number gets large (ignition): restart in our compressible code, Castro
– Same underlying BoxLib discretization
– Same microphysics
– Solves the fully compressible Euler equations using an unsplit PPM method
● Basic findings:
– Off-center ignition: background turbulence doesn't strongly affect flame propagation
– Central ignition: convective turbulence can push the flame off-center
– The single-degenerate model almost always produces an asymmetric explosion
– Single spot = small amount of burned mass = less expansion = higher density when the DDT occurs
(Malone et al. 2014)
22. Sub-Chandra He Convection
● Variations for different WD and He layer masses (Adam Jacobs's thesis)
● Cellular pattern forms
– Length scale converged with resolution
– Hot spots rise up and expand
● Potentially multiple hot spots simultaneously
● Three types of outcomes
– Localized runaway on a short timescale
– Nova-like convective burning
– Quasi-equilibrium (?)
● Next up: where does the burning ignite? And can it seed a detonation?
Visualization using yt
Refs: Zingale et al. 2013; Jacobs et al. 2016
23. Mergers
● We are also working on WD mergers (Max Katz's thesis)
● The focus is comparing high-resolution grid-based simulations to other studies to understand the numerical challenges
Visualization using yt
Refs: Katz et al. 2016
24. X-ray Bursts
● Thermonuclear runaway in a thin accreted H/He layer on the surface of a neutron star
● Accretion timescale ~ hours to days
● Runaway timescale ~ seconds
● > 70 sources known, some with 10s or more individual bursts
● Potential site for rp-process nucleosynthesis
Strohmayer et al., 1996, ApJ, 469:L9
25. Outstanding XRB Questions
● How does the fuel spread over the surface?
● How does the ignition begin?
● Is the burning localized?
● Does convection modify the nucleosynthesis?
● What are the effects of rotation?
● Does convection bring ash to the surface?
These are all multi-dimensional effects
26. X-ray Bursts
● Current calculations:
– 512 × 512 × 768 zones
– 6 cm resolution
– 11-nuclei network: captures H burning (hot CNO), 3-α, rp-process breakout
– T increases over 10^9 K; evolve for 0.02 s
● Next steps:
– Bigger domains
– Variety of initial models
Visualization using yt
Refs: Malone et al. 2011; Malone et al. 2014; Zingale et al. 2015
27. We Need 3-d!
● Convection requires 3-d
● Turbulence and instabilities are only properly realized in 3-d
– We'll never resolve the dissipation scale (Re ~ 10^14 for some of these systems)
[Figure: turbulent kinetic energy spectrum in Maestro XRB calculations; a minimal spectrum sketch follows this slide]
● Note that capturing turbulence requires a minimum of 512 zones across in our experience. If turbulence is important to your problem, you really need to do high resolution.
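As an illustration of the diagnostic shown on this slide, here is a minimal, generic Python sketch (not Maestro's actual analysis script; constant density is assumed, which sidesteps the density-weighting question raised on a later slide) of a shell-averaged kinetic energy spectrum for a periodic 3-d velocity field:

    import numpy as np

    def ke_spectrum(vx, vy, vz):
        """Shell-averaged kinetic energy spectrum E(k) for a periodic cube."""
        n = vx.shape[0]
        ek = 0.0
        for v in (vx, vy, vz):
            vhat = np.fft.fftn(v) / n**3
            ek = ek + 0.5 * np.abs(vhat)**2      # kinetic energy per mode
        k1d = np.fft.fftfreq(n) * n              # integer wavenumbers
        kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
        kmag = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
        E = np.bincount(kmag.ravel(), weights=ek.ravel())   # sum over shells
        return np.arange(E.size), E

    # toy random velocity field just to exercise the routine
    rng = np.random.default_rng(0)
    k, E = ke_spectrum(*[rng.standard_normal((64, 64, 64)) for _ in range(3)])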
28. Convective Urca
● Current Maestro application: the convective Urca process
[Figure: slice through the center of the WD, shaded according to energy generation rate (green/magenta), with a line integral convolution of the in-plane velocities (black). Simulation time: 10,000 seconds]
29. Community Efforts?
● Diagnostics
– Some discussion of this yesterday
– Many of us implement diagnostics (e.g., measuring the extent of the convective region, entrainment rates, ...)
– yt provides a natural framework to implement stellar hydro diagnostics (a minimal example follows this slide)
– Power spectrum for stratified atmospheres: is there density weighting?
● Microphysics efforts already mentioned
● Code comparisons / benchmark problems
– We should develop some standard problems to define convective boundary mixing, and allow for code comparison, development, and improvement
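As an example of the kind of yt-based diagnostic meant here, a mass-weighted radial profile of the radial velocity gives a quick estimate of the extent of a convective region. A minimal sketch (the plotfile name is a placeholder, and the exact field names depend on the code frontend):

    import yt

    ds = yt.load("plt00000")          # a plotfile from one of the codes
    ad = ds.all_data()

    # radial profile of v_r; where it drops toward zero marks the edge of
    # the convective region
    prof = yt.create_profile(ad, ("index", "radius"),
                             fields=[("gas", "radial_velocity")],
                             weight_field=("gas", "cell_mass"),
                             n_bins=128)

    for r, vr in zip(prof.x, prof[("gas", "radial_velocity")]):
        print(r, vr)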
30. White Paper, Volume I
● An outcome we discussed is the generation of a white paper
● Proposal:
– Set up a git repo on github in an astro-whitepapers org
– Create a template based on our discussion
– Constantly pester everyone
– Fall deadline?
● Success depends on building on the inertia from this meeting
[Figure: list of 3-d stellar hydro problems from Eggleton et al. 2003 in the Djehuty code intro]
31. Training Students
● To grow the field, we need to train our students in the numerical (and stellar) techniques
● It is also essential for the students who leave the field to develop employable skills, e.g., programming, HPC, analysis, ...
● Students should get exposure to codes and learn how to implement new ideas
● Credit should be given for development; this includes, e.g., git log / github activity
32. Pyro: Hydro by Example
● A "python hydro" code designed with clarity in mind to teach students about simulation techniques
● 2-d solvers for:
– Linear advection (a minimal sketch of this simplest solver follows this slide)
– Compressible hydrodynamics
– Elliptic equations (via multigrid)
– Implicit diffusion
– Incompressible hydrodynamics
– Low Mach number atmospheric flows
– Coming soon: gray flux-limited diffusion radiation hydrodynamics
● BSD-3 licensed, up on github: https://github.com/zingale/pyro2
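In the spirit of pyro's teaching goal, here is a self-contained sketch (not pyro's actual API) of the simplest solver on the list above: first-order upwind finite-volume advection of a top-hat profile in 1-d with periodic boundaries.

    import numpy as np

    def advect(n=128, u=1.0, cfl=0.8, tmax=1.0):
        """Advect a top-hat profile with first-order upwinding (u > 0)."""
        dx = 1.0 / n
        x = (np.arange(n) + 0.5) * dx
        a = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)    # initial condition

        t = 0.0
        while t < tmax:
            dt = min(cfl * dx / abs(u), tmax - t)
            flux = u * a                                  # upwind interface flux
            a = a - dt / dx * (flux - np.roll(flux, 1))   # conservative update
            t += dt
        return x, a

    x, a = advect()   # after t = 1.0 the profile returns to its starting position

pyro's actual advection solver is more sophisticated (second order, with limiters); this stripped-down version only shows the finite-volume update structure.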
33. Intro to Comp Hydro for Astro
● Current contents:
– Simulation Overview
– Classification of PDEs
– Finite Volume Grids
– Advection
– Burgers' Equation
– Euler Equations: Theory
– Euler Equations: Numerical Methods
– Elliptic Equations and Multigrid
– Diffusion
– Multiphysics Applications
– Reacting Flows
– Planning a Simulation
– Incompressible Flow and Projection Methods
– Low Mach Number Methods
● Freely available under a CC license
● Code for every figure and method shown is provided
● Contributions welcomed via github issues and PRs
https://github.com/Open-Astrophysics-Bookshelf/numerical_exercises/
34. pyreaclib
● Python interface for ReacLib rates
– https://github.com/pyreaclib/
● Allows for interactive exploration of rates and collections (networks) in Jupyter
● Outputs the full right-hand side routine in:
– Python
– Cython (soon)
– Fortran (standalone with Sundials)
– Fortran for BoxLib Microphysics
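To illustrate what a generated right-hand side looks like conceptually, here is a hypothetical, hand-written sketch (not pyreaclib output; the two-reaction toy network and the rate expressions are made up) of dY/dt for a small helium/carbon network:

    import numpy as np

    def rhs(t, Y, rho, T):
        """Toy dY/dt for a 3-species network: Y = [Y_he4, Y_c12, Y_o16].

        The rate expressions below are placeholders, not ReacLib fits.
        """
        lam_3a = 1.0e-9 * T**3        # stand-in for the 3-alpha rate
        lam_c12a = 1.0e-6 * T**2      # stand-in for 12C(a,g)16O

        r_3a = lam_3a * rho**2 * Y[0]**3 / 6.0
        r_c12a = lam_c12a * rho * Y[0] * Y[1]

        dY = np.zeros_like(Y)
        dY[0] = -3.0 * r_3a - r_c12a  # 4He consumed
        dY[1] = r_3a - r_c12a         # 12C produced, then consumed
        dY[2] = r_c12a                # 16O produced
        return dY

The real generated routines carry the full ReacLib temperature fits rather than these placeholders.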
35. Summary/Future
● Astrophysical modeling requires the cooperation of many different domain scientists
● Chandra SNe Ia:
– single-point, off-center ignition
– convection calculations in the Chandra model are used to set explosion initial conditions
– Urca-process calculations are starting
● Sub-Ch SNe Ia: variety of behaviors, likely single-point…
● WD mergers: infrastructure developed to allow for accurate, high-resolution modeling
● XRBs: well-developed convective field realized
– Working on understanding flame propagation now
● Maestro development directions: rotation, higher-order methods, acoustics, MHD, ???
● Castro development: improved radiation, better coupling, GPU hydro
– Black widow pulsar systems are a new focus
● Releasing simulation codes / problem files is part of scientific reproducibility