Evaluation and selection of polycrystalline microstructures for
fatigue resistance through computational means is hampered by the high cost of crystal plasticity finite element (CPFEM) simulations for elastic-plastic analysis. In this work, novel approaches are employed to compare the projected high-cycle fatigue (HCF) and low-cycle fatigue (LCF) resistance of alpha-beta titanium microstructures with a variety of textures and boundary conditions, based on mesoscopic fatigue indicator parameters (FIPs). Specifically, a materials knowledge system approach for modeling local grain responses from spatial statistics is developed to quickly evaluate strain fields for a set of statistical volume elements (SVEs) representing a particular microstructure. An explicit integration scheme (or a calibrated function) is then developed to estimate the plastic strain in each voxel, allowing FIPs to be calculated for each SVE and the robustness of each microstructure to be evaluated for HCF applications. This data science approach is orders of magnitude faster than traditional CPFEM, making it possible to compare large numbers of microstructures and identify those most suitable.
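The summary above does not spell out the FIP definition; a widely used mesoscale choice is a Fatemi-Socie-type parameter built from the cyclic plastic shear strain range and the normal stress on the critical plane. The sketch below (Python/NumPy) shows how per-voxel FIPs and an extreme-value FIP per SVE might be computed; the function names, array layout, and the sensitivity constant k = 0.5 are illustrative assumptions, not values from the work.

```python
import numpy as np

def fatemi_socie_fip(delta_gamma, sigma_n, sigma_y, k=0.5):
    """Fatemi-Socie-type FIP per voxel: (dgamma/2) * (1 + k*sigma_n/sigma_y).

    delta_gamma : cyclic plastic shear strain range on the critical plane
    sigma_n     : peak stress normal to that plane
    sigma_y     : macroscopic yield strength (normalization)
    k           : normal-stress sensitivity constant (assumed here)
    """
    return 0.5 * delta_gamma * (1.0 + k * sigma_n / sigma_y)

def sve_extreme_fip(delta_gamma, sigma_n, sigma_y):
    """Extreme-value FIP over all voxels of one SVE, used to rank SVEs."""
    return float(np.max(fatemi_socie_fip(delta_gamma, sigma_n, sigma_y)))
```

Ranking SVEs by their extreme-value FIP is a common screening metric for HCF, since fatigue life is governed by the worst-performing neighborhood rather than the average response.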
This document summarizes research on developing autonomous experimental systems for materials characterization and phase diagram mapping. Key points discussed include:
- Using active clustering algorithms and Gaussian process classification to analyze x-ray diffraction data and autonomously map phase diagrams without human labeling or supervision.
- Developing infrastructure for autonomous experiments involving autonomous data analysis, selection of new experimental conditions based on analysis, and control of experimental equipment to acquire new data.
- Demonstrating this approach on systems like VNbO2 and VWO2 to map phase diagrams and metal-insulator transition temperatures as a function of composition and temperature.
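As a toy illustration of the mapping step above, the sketch below fits a Gaussian process classifier over a (composition, temperature) grid; in the autonomous loop, the points where the predicted class probability is nearest 0.5 are where an active learner would measure next. Assumes scikit-learn; the phase boundary T = 400 + 200x and all data are synthetic, not from the systems studied.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform([0.0, 300.0], [1.0, 700.0], size=(80, 2))  # (composition x, temperature K)
y = (X[:, 1] > 400.0 + 200.0 * X[:, 0]).astype(int)        # invented phase boundary

# Anisotropic RBF: separate length scales for composition and temperature.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF([0.2, 100.0])).fit(X, y)
proba = gpc.predict_proba(np.array([[0.5, 350.0], [0.5, 650.0]]))
# Points with proba near 0.5 are the most informative next measurements.
```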
A Machine Learning Framework for Materials Knowledge Systems (aimsnist)
- The document describes a machine learning framework for developing artificial intelligence-based materials knowledge systems (MKS) to support accelerated materials discovery and development.
- The MKS would have main functions of diagnosing materials problems, predicting materials behaviors, and recommending materials selections or process adjustments.
- It would utilize a Bayesian statistical approach to curate process-structure-property linkages for all materials classes and length scales, accounting for uncertainty in the knowledge, and allow continuous updates from new information sources.
[poster] A Compare-Aggregate Model with Latent Clustering for Answer Selection (Seoul National University)
CIKM 2019
In this paper, we propose a novel method for sentence-level answer selection, one of the fundamental problems in natural language processing. First, we explore the effect of additional information by adopting a pretrained language model to compute the vector representation of the input text and by applying transfer learning from a large-scale corpus. Second, we enhance the compare-aggregate model by proposing a novel latent clustering method to compute additional information within the target corpus and by changing the objective function from listwise to pointwise. To evaluate the performance of the proposed approaches, experiments are performed with the WikiQA and TRECQA datasets. The empirical results demonstrate the superiority of our proposed approach, which achieves state-of-the-art performance on both datasets.
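This is not the paper's actual model, but the objective change it describes can be sketched: a pointwise objective scores each (question, candidate) pair independently with binary cross-entropy, while a listwise objective normalizes scores over all of a question's candidates before comparing against the labels.

```python
import numpy as np

def pointwise_loss(scores, labels):
    """Binary cross-entropy over (question, candidate) pairs scored
    independently -- the pointwise objective the paper switches to."""
    p = 1.0 / (1.0 + np.exp(-scores))
    return float(-np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p)))

def listwise_loss(scores, labels):
    """Listwise objective: cross-entropy between the softmax over all of a
    question's candidates and the normalized relevance labels."""
    log_softmax = scores - np.log(np.sum(np.exp(scores)))
    target = labels / np.sum(labels)
    return float(-np.sum(target * log_softmax))
```

The pointwise form lets every candidate contribute a gradient on its own, which is the practical advantage the paper exploits.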
Recycled aggregate from waste concrete is one of the most widely used and economical materials in civil projects around the world, and its importance is undeniable. Construction and demolition processes also generate an extensive, irreversible waste stream entering the industrial cycle, which is a significant burden on the economy. In this investigation, new aggregate was produced from construction waste, including waste concrete, brick, ceramic, tile, and stone, and was used with different cement weight ratios in the mix design. Laboratory results showed that a sand-to-cement ratio of 1, with waste aggregate replacing normal aggregate at a 20% weight ratio (W20), increased the 28-day compressive strength to a maximum of 45.23 MPa. In the next stage, a backpropagation neural network was used to generalize the experimental results; with a regression of about 91%, an error of 0.24, and a runtime of 1.41 seconds, it proved a suitable method for estimating the results.
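As a hedged sketch of the modeling step, the snippet below trains a small backpropagation network (scikit-learn's MLPRegressor) on synthetic mix-design data; the two features, the toy response surface peaking near the reported optimum, and all numbers are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Invented features: [sand/cement ratio, waste-aggregate weight fraction]
X = rng.uniform([0.5, 0.0], [2.0, 0.4], size=(200, 2))
# Toy 28-day strength peaking near the reported optimum (ratio 1, 20% waste):
y = 40.0 + 5.0 * np.exp(-(X[:, 0] - 1.0) ** 2 - ((X[:, 1] - 0.2) / 0.1) ** 2) \
    + rng.normal(0.0, 0.3, size=200)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)
pred = model.predict(X)
```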
Automated Generation of High-accuracy Interatomic Potentials Using Quantum Data (aimsnist)
Sandia National Laboratories is developing SNAP (Spectral Neighbor Analysis Potential) potentials for molecular dynamics simulations. SNAP potentials are fitted to quantum mechanical data using bispectrum components that describe the local atomic environments. SNAP potentials have been shown to accurately reproduce properties of tantalum, including liquid structure and screw dislocation behavior not included in the training data. Work is ongoing to develop multi-element SNAP potentials, including for tungsten-beryllium alloys relevant to modeling plasma-surface interactions in nuclear fusion reactors.
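The SNAP energy model is linear in the bispectrum components, so fitting reduces to least squares against quantum-mechanical reference energies. The sketch below uses random stand-in features rather than real bispectrum data; the dimensions and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_configs, n_features = 50, 8
B = rng.normal(size=(n_configs, n_features))   # stand-in bispectrum sums per configuration
beta_true = rng.normal(size=n_features)        # "unknown" SNAP coefficients
E_dft = B @ beta_true + rng.normal(0.0, 1e-3, size=n_configs)  # noisy reference energies

# Linear least-squares fit of the SNAP coefficients to the quantum data.
beta_fit, *_ = np.linalg.lstsq(B, E_dft, rcond=None)
```

In practice the design matrix also stacks forces and stresses, and the rows are weighted by configuration type, but the linear-regression core is the same.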
This document presents a novel protocol for developing structure-property linkages for polycrystalline materials. It generates synthetic microstructure datasets and uses finite element simulations to calculate their elastic properties. Principal component analysis is used to reduce the dimensionality of the microstructure representations. Initial regression analyses show promising results in establishing structure-property linkages for elastic response, though further improvement is needed. The approach provides a compact and continuous representation of crystal orientations using generalized spherical harmonics.
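The reduced-order pipeline described (PCA on microstructure representations, then regression to an elastic property) can be sketched in plain NumPy; the "spatial statistics" and the toy modulus below are placeholders, not the protocol's data.

```python
import numpy as np

rng = np.random.default_rng(3)
stats = rng.normal(size=(40, 100))        # 40 microstructures x 100 statistics (placeholder)
stats_c = stats - stats.mean(axis=0)      # center before PCA
U, S, Vt = np.linalg.svd(stats_c, full_matrices=False)
scores = stats_c @ Vt[:3].T               # scores on the first 3 principal components

# Toy "effective modulus" that is exactly linear in the PC scores:
E_eff = scores @ np.array([2.0, -1.0, 0.5]) + 100.0
A = np.column_stack([scores, np.ones(len(scores))])
coef, *_ = np.linalg.lstsq(A, E_eff, rcond=None)   # recovers the linkage
```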
The document discusses ferrous material structure and binary alloy systems. It provides information on:
1) Metal production industries such as automated welding equipment, metal cleaning equipment, foundry equipment, CNC machine refit, plasma and laser cutting, press room equipment, die monitoring and control, spin forming, precision winding, and remote control systems.
2) The content of iron ore, which is usually rich in iron oxides and includes magnetite, hematite, goethite, limonite, and siderite.
3) The process of iron production using a blast furnace, where iron oxide, limestone, coke, and oxygen are charged and react to produce molten iron and slag.
This document discusses transformers, including:
- Transformers change AC electrical power at one voltage level into another voltage level through magnetic fields, without changing frequency.
- They have two coils, a primary and secondary, that are magnetically linked but electrically isolated.
- Transformers can either step up or step down voltage depending on the ratio of turns in the primary and secondary coils.
- The main types are core-type transformers, which have cylindrical coils around a central core, and shell-type transformers, which have disc-shaped coil layers stacked together.
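The turns-ratio relation in the bullets above is, for an ideal transformer, V_s / V_p = N_s / N_p, with frequency unchanged. A one-line sketch:

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal transformer: V_s = V_p * N_s / N_p (frequency unchanged)."""
    return v_primary * n_secondary / n_primary

# Step-down: 240 V with a 1000:50 winding gives 12 V.
# Step-up: 120 V with a 100:1000 winding gives 1200 V.
```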
The document discusses key concepts in material technology including:
1. It defines the basic structure of atoms and different types of materials including elements, mixtures, and compounds.
2. It describes atomic structure including atomic number, atomic mass, and atomic orbits. The periodic table is introduced as a way to classify and understand elements and their properties.
3. Different types of crystal structures are defined including body centered cubic, face centered cubic, and hexagonal close packed. Bonding types such as covalent, metallic, and ionic are also introduced.
4. Terminology used in phase diagrams is defined including phases, equilibrium, composition, liquidus, and solidus. Binary alloy systems containing two components are also introduced.
This document summarizes the deformation behavior of single crystals and polycrystalline materials under tensile stress. It explains that in single crystals, plastic deformation occurs through slip along specific crystallographic planes and directions known as slip systems. Schmid's law describes the relationship between applied stress and critical resolved shear stress required for slip. In polycrystalline materials, deformation is more complex due to interactions between randomly oriented grains. Neighboring grains constrain each other's deformation, resulting in higher strength compared to single crystals.
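Schmid's law as stated can be written directly: the resolved shear stress on a slip system is tau = sigma * cos(phi) * cos(lambda), and slip begins when tau reaches the critical resolved shear stress. A minimal sketch:

```python
import math

def resolved_shear_stress(sigma, phi_deg, lambda_deg):
    """Schmid's law: tau = sigma * cos(phi) * cos(lambda).

    phi    : angle between the load axis and the slip-plane normal (degrees)
    lambda : angle between the load axis and the slip direction (degrees)
    """
    return sigma * math.cos(math.radians(phi_deg)) * math.cos(math.radians(lambda_deg))
```

The Schmid factor cos(phi)*cos(lambda) is maximized at 0.5 when both angles are 45 degrees, which is why slip systems near that orientation yield first.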
Chapter 1: Material Structure and Binary Alloy Systems (yar 2604)
This is an introduction to material structure and periodic table system. This topic also describes microstructure of the metals and alloys solidification.
The document summarizes different mechanical testing methods and non-destructive testing techniques. It discusses various hardness tests including Brinell, Vickers, Rockwell, and Knoop tests. Impact/toughness tests like Izod and Charpy tests are also covered. Non-destructive methods such as liquid penetrant, magnetic particle, ultrasonic, radiographic, and eddy current inspections are described along with their principles and purposes.
This chapter discusses dislocation theory and behavior in metals. Key topics covered include:
- Observation techniques for dislocations like etching and transmission electron microscopy
- Burgers vectors and dislocation loops that describe the geometry and movement of dislocations
- Dislocation behavior depends on the crystal structure, including dissociation in FCC into Shockley partials and easy cross-slip
- Dislocations interact through stress fields and forces, which influence deformation and strengthening mechanisms in metals
The document discusses materials and their properties. It defines raw materials and technical materials, and gives examples like wood, plastics, metals, stone, textiles, and composites. It then describes various properties of materials including physical properties like density and conductivity, mechanical properties like hardness and elasticity, chemical properties like oxidation, and ecological properties like recyclability. It stresses that the properties of materials determine their uses. It concludes that selecting a material requires considering factors like price, properties, production possibilities, availability, and environmental impact.
The document discusses the structure of crystalline solids. It explains that crystalline materials have orderly repeating atomic patterns extending in three dimensions, which gives rise to different crystal structures like FCC, BCC, and HCP. The crystal structure determines properties and affects whether a material is isotropic or anisotropic. Polycrystalline materials tend to have isotropic properties while single crystals are anisotropic.
1. Solidification occurs when a liquid metal cools and transforms into a solid below its melting point, through the process of nucleation and crystal growth.
2. During nucleation, small clusters of atoms (nuclei) form in the undercooled liquid, which must reach a critical size to become stable crystals.
3. Once stable nuclei form, the crystals grow through addition of atoms from the liquid until they impinge on neighboring crystals. Cooling curves can be used to study phase changes during solidification of pure metals and alloys.
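The critical-size condition in point 2 follows from classical nucleation theory: for homogeneous nucleation, the critical radius is r* = 2*gamma/DeltaG_v and the barrier is DeltaG* = 16*pi*gamma^3/(3*DeltaG_v^2). A small sketch (symbols and values illustrative):

```python
import math

def critical_radius(gamma, dG_v):
    """Classical nucleation theory: r* = 2*gamma / dG_v.

    gamma : solid-liquid interfacial energy (J/m^2)
    dG_v  : volumetric free-energy change on solidification (J/m^3)
    """
    return 2.0 * gamma / dG_v

def nucleation_barrier(gamma, dG_v):
    """Energy barrier for homogeneous nucleation: 16*pi*gamma^3 / (3*dG_v^2)."""
    return 16.0 * math.pi * gamma ** 3 / (3.0 * dG_v ** 2)
```

Since DeltaG_v grows with undercooling, r* shrinks as the melt is cooled further below the melting point, which is why deeper undercooling makes nucleation easier.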
This document provides an overview of materials technology and mechanical properties. It discusses how metals, plastics, and ceramics have different properties requiring different production technologies. Key mechanical properties like stress, strain, elasticity, strength, and toughness are defined. The document also summarizes different types of material deformation including elastic, plastic, viscoelastic, and superplastic deformation. Different strengthening mechanisms are described such as work hardening, solid solutioning, and dispersion hardening.
Corrosion is the destruction of metals through chemical or electrochemical reaction with the environment. There are two main types of corrosion: general/uniform corrosion, which occurs at the same rate over the entire metal surface, and localized corrosion, which occurs in specific areas like crevices or grain boundaries. Corrosion can be controlled through methods like cathodic protection, coating, material selection, and design considerations.
This section covers the behaviour of materials by introducing the stress-strain curve. The concepts of elastic and plastic deformation are covered, leading to a discussion of the microstructure of materials and a physical explanation of what happens to a polycrystalline material as it is loaded to failure.
Presentation on machine learning and materials science at the Computing in Engineering Forum 2018 / Machine Ground Interaction Consortium (MaGIC) 2018, Madison, Wisconsin, December 4, 2018.
Going Smart and Deep on Materials at ALCF (Ian Foster)
As we acquire large quantities of science data from experiment and simulation, it becomes possible to apply machine learning (ML) to those data to build predictive models and to guide future simulations and experiments. Leadership Computing Facilities need to make it easy to assemble such data collections and to develop, deploy, and run associated ML models.
We describe and demonstrate here how we are realizing such capabilities at the Argonne Leadership Computing Facility. In our demonstration, we use large quantities of time-dependent density functional theory (TDDFT) data on proton stopping power in various materials maintained in the Materials Data Facility (MDF) to build machine learning models, ranging from simple linear models to complex artificial neural networks, that are then employed to manage computations, improving their accuracy and reducing their cost. We highlight the use of new services being prototyped at Argonne to organize and assemble large data collections (MDF in this case), associate ML models with data collections, discover available data and models, work with these data and models in an interactive Jupyter environment, and launch new computations on ALCF resources.
This document summarizes the Particle Swarm Optimization (PSO) algorithm. PSO is a population-based stochastic optimization technique inspired by bird flocking. It works by having a population of candidate solutions, called particles, that fly through the problem space, with the movements of each particle influenced by its local best known position as well as the global best known position. The document provides an overview of PSO and its applications, describes the basic PSO algorithm and several variants, and discusses parallel and structural optimization implementations of PSO.
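The velocity and position updates described above can be sketched in a few lines: each particle's velocity blends inertia, a pull toward its personal best, and a pull toward the swarm's global best. The coefficients below are typical textbook defaults, not values from the document.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

# Minimize the 3-D sphere function as a smoke test.
best_x, best_val = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=3)
```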
This document summarizes Frances Kuo's work applying Quasi-Monte Carlo (QMC) methods to solve partial differential equations (PDEs) with random coefficients. It introduces a motivating example of modeling groundwater flow with uncertainty in porous medium properties. It then provides an overview of QMC methods, including advantages over Monte Carlo, construction techniques like lattice rules, and three theoretical settings for applying QMC to PDEs with random coefficients. The document outlines Kuo's collaborations applying QMC to problems with uniform and lognormal random coefficients under these different settings.
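One of the constructions mentioned, a rank-1 lattice rule, generates QMC points x_i = frac(i * z / n) from a single generating vector z. The Fibonacci choice below (n = 987, z = (1, 610)) is a classical 2-D example, not one of the optimized component-by-component vectors from Kuo's work.

```python
import numpy as np

def lattice_points(n, z):
    """Rank-1 lattice rule: x_i = frac(i * z / n), i = 0..n-1."""
    i = np.arange(n).reshape(-1, 1)
    return (i * np.asarray(z, dtype=float) / n) % 1.0

# Fibonacci lattice (n = F_16 = 987, z = (1, F_15 = 610)).
pts = lattice_points(987, [1, 610])
# Equal-weight QMC estimate of the integral of x*y over [0,1]^2 (exact value 1/4).
estimate = float(np.mean(np.prod(pts, axis=1)))
```

For smooth integrands, well-chosen lattices converge near O(1/n), versus O(1/sqrt(n)) for plain Monte Carlo, which is the advantage the document highlights.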
The document discusses research on learning to improve the efficiency of search and reasoning algorithms through speedup learning. It provides these key points:
1) Early work on explanation-based learning for speedup had limited success, but techniques like memoization and clause learning led to major improvements in SAT solvers.
2) More recent approaches use machine learning to build predictive models of problem instances and solver behavior, trained on dynamic features, to learn policies for controlling search, such as automatic noise setting and randomized restart policies.
3) Case studies demonstrate that these learning-based approaches can outperform traditional techniques and fixed policies by customizing resource allocation and reformulation based on problem structure and solver progress.
4) Open problems remain in developing optimal predictive policies under partial information and approximation, to continue improving search and reasoning performance.
Available methods for predicting materials synthesizability using computation... (Anubhav Jain)
This document summarizes a talk about computational and machine learning approaches for predicting materials synthesizability. It discusses how machine learning algorithms are generating millions of potential stable compound predictions, far more than can be experimentally tested. It also examines ways to better prioritize candidate materials for synthesis, such as by assessing their likelihood of dynamical stability and calculating their finite-temperature Gibbs free energies more efficiently using machine-learned interatomic force constants. Finally, it describes efforts to integrate literature knowledge using natural language processing to further guide experimental exploration and reduce the number of experiments needed to synthesize predicted materials.
The Society of Petroleum Engineers Distinguished Lecturer Program provides funding through member donations and industry support to bring expert lecturers to discuss emerging topics. This lecture discusses how big data analytics can help petroleum engineers and geoscientists reduce costs, improve productivity and efficiency by analyzing large datasets to find patterns and relationships. Case studies demonstrate applications in reservoir modeling, production optimization, and predictive maintenance.
The Society of Petroleum Engineers Distinguished Lecturer Program provides funding through member donations and industry support to bring expert lecturers to discuss emerging topics. This lecture discusses how big data analytics can help petroleum engineers and geoscientists reduce costs, improve productivity and efficiency by analyzing large datasets to find patterns and relationships. Case studies demonstrate applications in reservoir modeling, production optimization, and predictive maintenance.
This document discusses multidimensional model order selection techniques. It begins by motivating the need for model order selection in applications such as analyzing stock market data, ultraviolet-visible spectrometry data, and sound source localization data. It then introduces tensor calculus and one-dimensional model order selection techniques before discussing novel contributions to multidimensional model order selection, including the R-D Exponential Fitting Test and Closed-Form PARAFAC based model order selection, which outperform existing techniques. Comparisons to other state-of-the-art methods are also discussed.
Computational materials design with high-throughput and machine learning methodsAnubhav Jain
Computational materials design with high-throughput and machine learning methods was presented. The presentation discussed (1) using density functional theory and high-throughput screening to rapidly generate data on many materials, (2) developing data mining approaches like matminer and matbench to extract useful information and connect to machine learning algorithms from the large volumes of data, and (3) concluded with a discussion of using these methods to accelerate materials innovation.
The Materials Project: An Electronic Structure Database for Community-Based M...Anubhav Jain
The document summarizes the Materials Project, an electronic structure database for materials design maintained by Lawrence Berkeley National Laboratory. It describes how the Materials Project uses high-throughput density functional theory calculations to compute properties of over 50,000 materials in its database. Users can search for materials, analyze computed properties, and design new materials using tools on the project's website.
Prediction Of Bioactivity From Chemical StructureJeremy Besnard
The document provides an overview of quantitative structure-activity relationship (QSAR) modeling for predicting bioactivity from chemical structure. It discusses different types of QSAR models for continuous and categorical activity predictions. It also covers topics like molecular descriptors, model validation, and various statistical methods used in QSAR like linear regression, recursive partitioning, naïve Bayesian classifiers, and more. The document aims to give a practical introduction to key concepts in chemoinformatics and QSAR modeling.
The document discusses JARVIS-ML, an AI system for fast and accurate screening of materials properties. It uses machine learning models trained on a large dataset of materials properties calculated using density functional theory. Some key points:
- JARVIS-ML uses gradient boosting decision trees to predict properties like formation energies, bandgaps, and elastic moduli, achieving good accuracy compared to DFT calculations.
- Feature selection is important, and JARVIS-ML uses over 1,500 descriptors of atomic structure. Chemical features are most important for predictions.
- The models can screen thousands of materials in seconds, much faster than DFT. This enables large-scale materials discovery tasks like genetic algorithm searches.
Combining density functional theory calculations, supercomputing, and data-dr...Anubhav Jain
The document summarizes how computational materials science using density functional theory (DFT) calculations, supercomputing, and data-driven methods can help design new materials faster than traditional experimental approaches. It describes how high-throughput DFT calculations are run on supercomputers to screen large numbers of potential materials. The results are compiled in open databases like the Materials Project to be shared and reused by researchers. While computational limitations remain, combining computation and data is helping accelerate the discovery of new materials with improved properties for applications like batteries, thermoelectrics, and carbon capture.
Automated Machine Learning Applied to Diverse Materials Design ProblemsAnubhav Jain
Automated Machine Learning Applied to Diverse Materials Design Problems
Anubhav Jain presented on developing standardized benchmark datasets and algorithms for automated machine learning in materials science. Matbench provides a diverse set of materials design problems for evaluating ML algorithms, including classification and regression tasks of varying sizes from experiments and DFT. Automatminer is a "black box" ML algorithm that uses genetic algorithms to automatically generate features, select models, and tune hyperparameters on a given dataset, performing comparably to specialized literature methods on small datasets but less well on large datasets. Standardized evaluations can help accelerate progress in automated ML for materials design.
Ash Abel completed a summer internship with Clemson University's Mechanical Engineering department. Their main project involved vibration and acoustic testing of a 7.5MW test bench that emitted uncomfortable noise levels. Through research, testing, and analysis, Ash identified possible causes such as loose connections and resonance. They designed tests using accelerometers and microphones to analyze vibration and acoustic data. Potential solutions like shortening the support beam or changing its cross-section were modeled and tested. Analysis showed modifying connections and the beam's design could effectively reduce noise levels. Ash also assisted engineers and created support materials during their internship.
Similar to Computationally Efficient Protocols to Evaluate the Fatigue Resistance of Polycrystalline Materials (20)
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita...Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
Sexuality - Issues, Attitude and Behaviour - Applied Social Psychology - Psyc...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
Farming systems analysis: what have we learnt?.pptx
Computationally Efficient Protocols to Evaluate the Fatigue Resistance of Polycrystalline Materials
1. Computationally Efficient Protocols to Evaluate the Fatigue Resistance of Polycrystalline Materials
Noah H. Paulson, Matthew W. Priddy, Surya R. Kalidindi, and David L. McDowell
4. Material Property Representation
SVE Concept (Kanit, et al., 2003)
Numerous samples are needed to capture the statistics of the properties of the material. Let us call these samples statistical volume elements (SVEs).
RVE vs. SVE set
5. Background
Fatigue Indicator Parameters
FIPs are a surrogate measure of the driving force for fatigue crack formation and growth.
Critical Plane Approach
• Fatemi-Socie parameter (Fatemi, et al., 1988):
FIP_FS = (Δγ^p_max / 2) · (1 + k · σ^max_n / σ_y)
Crack formation due to intense shear along the slip band of Ti-6Al-4V (Le Biavant, et al., 2001)
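As a concrete reading of the Fatemi-Socie parameter, a minimal Python sketch (the function name and the default k = 0.5 are illustrative assumptions, not values from the slides):

```python
def fatemi_socie_fip(dgamma_p_max, sigma_n_max, sigma_y, k=0.5):
    """Fatemi-Socie FIP: (dgamma_p_max / 2) * (1 + k * sigma_n_max / sigma_y).

    dgamma_p_max : maximum cyclic plastic shear strain range on the critical plane
    sigma_n_max  : peak stress normal to that plane over the cycle
    sigma_y      : macroscopic yield strength
    k            : normal-stress sensitivity constant (0.5 is illustrative)
    """
    return 0.5 * dgamma_p_max * (1.0 + k * sigma_n_max / sigma_y)
```

In the critical plane approach this would be evaluated per voxel, on the plane that maximizes the cyclic plastic shear strain range.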
6. Problem Statement
Computational Burdens
Hypothetical: rank the HCF resistance of the 12 heat treatments of Ti-64.
CPU time = 12 microstructures × (100 SVEs / microstructure) × (1.5 hours × 4 processors / SVE) = 7200 hours
A more efficient approach is needed to make computational fatigue analysis feasible for industrial applications.
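The cost estimate above is simple arithmetic; spelled out (variable names are just for illustration):

```python
# Back-of-the-envelope CPU cost of the hypothetical CPFEM ranking study
n_microstructures = 12
sves_per_microstructure = 100
hours_per_sve = 1.5        # wall-clock hours per SVE
processors_per_sve = 4

cpu_hours = (n_microstructures * sves_per_microstructure
             * hours_per_sve * processors_per_sve)
print(cpu_hours)  # 7200.0
```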
7. HCF Study: MKS Approach
Generate SVE set (Statistical Volume Elements) → MKS: predict local ε_tot field → estimate ε_pl from ε_tot → get FIP fields + FIP EVDs → evaluate HCF resistance
Estimating ε_pl from ε_tot (assuming ε_tot ≅ ε_el): ε_el → σ → τ^α → integrate flow rule → γ^α → ε_pl
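The MKS prediction of the local ε_tot field is, in essence, a discrete series of calibrated influence coefficients convolved with the microstructure function, which the FFT makes cheap on a periodic voxel grid. A minimal sketch under that assumption (array shapes and names are illustrative, and this is not the authors' implementation):

```python
import numpy as np

def mks_predict(microstructure, influence_coeffs):
    """MKS localization: p[x] = sum_n sum_r a[n, r] * m[n, x + r].

    microstructure   : (H, Nx, Ny, Nz) one-hot local-state field m[n, x]
    influence_coeffs : (H, Nx, Ny, Nz) calibrated influence kernels a[n, r]
    The sum over r is a periodic cross-correlation, evaluated via the FFT.
    """
    M = np.fft.fftn(microstructure, axes=(1, 2, 3))
    A = np.fft.fftn(influence_coeffs, axes=(1, 2, 3))
    # cross-correlation theorem: correlate each local-state channel, sum over n
    return np.real(np.fft.ifftn(np.sum(np.conj(A) * M, axis=0)))
```

In practice each component of the strain tensor would carry its own set of influence coefficients; the sketch shows one scalar response field.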
9. HCF Study: MKS Approach
Calibration: generate SVE set for training → FEM: calculate local ε_tot fields → calibrate MKS influence coefficients
Prediction: generate SVE set (Statistical Volume Elements) → MKS: predict local ε_tot field → estimate ε_pl from ε_tot → get FIP fields + FIP EVDs → evaluate HCF resistance
Estimating ε_pl from ε_tot (assuming ε_tot ≅ ε_el): ε_el → σ → τ^α → integrate flow rule → γ^α → ε_pl
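The calibration step (FEM responses of training SVEs → influence coefficients) amounts to linear regression, which decouples into one small least-squares problem per spatial frequency after a DFT, as the editor's notes below also indicate. A sketch of that idea; shapes and names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def calibrate_influence_coeffs(microstructures, responses):
    """Fit MKS influence kernels from (SVE, FEM-response) training pairs.

    microstructures : (S, H, Nx, Ny, Nz) one-hot fields for S training SVEs
    responses       : (S, Nx, Ny, Nz) matching local response fields from FEM
    Returns real-space influence kernels of shape (H, Nx, Ny, Nz).
    """
    S, H = microstructures.shape[:2]
    grid = microstructures.shape[2:]
    M = np.fft.fftn(microstructures, axes=(2, 3, 4)).reshape(S, H, -1)
    P = np.fft.fftn(responses, axes=(1, 2, 3)).reshape(S, -1)
    A = np.empty((H, M.shape[2]), dtype=complex)
    for k in range(M.shape[2]):  # independent regression at each frequency
        b, *_ = np.linalg.lstsq(M[:, :, k], P[:, k], rcond=None)
        A[:, k] = np.conj(b)     # prediction uses conj(A) * M
    return np.real(np.fft.ifftn(A.reshape(H, *grid), axes=(1, 2, 3)))
```

Once calibrated, predicting a new SVE costs only a few FFTs, which is the source of the speedup over repeated CPFEM runs.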
11. HCF Study
SVEs and Loading
DREAM.3D input information:
– Grain size distribution (avg. grain size: 43 elements)
– Misorientation distribution
– Texture
Fully-reversed cyclic loading (strain ε vs. time t):
• x-, y-, and z-direction uniaxial strain
• Periodic boundary conditions
12. HCF Study
MKS Results (α-Ti Basal Texture)
ε11 mean error: 0.22%, ε11 max error: 1.3%
err ≡ |ε_ij^FEM − ε_ij^MKS| / ε_ij^FEM
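The error metric on this slide is straightforward to evaluate over a voxel field; a small helper (function and argument names are illustrative):

```python
import numpy as np

def strain_error_pct(eps_fem, eps_mks):
    """Voxel-wise MKS error relative to FEM, in percent; returns (mean, max)."""
    err = 100.0 * np.abs(eps_fem - eps_mks) / np.abs(eps_fem)
    return float(err.mean()), float(err.max())
```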
15. HCF Study Results
Gumbel distribution fits of the FIP extreme value distributions
• New protocol 240X faster than traditional protocols
• Traditional protocol: 1.5 hours on 4 processors per SVE (21,600 CPU-seconds)
• New protocol: 90 seconds on 1 processor per SVE (90 CPU-seconds)
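The Gumbel (Type I extreme value) fit of the per-SVE maximum FIPs can be sketched as follows; for brevity this uses a method-of-moments fit, whereas a maximum-likelihood fit (e.g. scipy.stats.gumbel_r.fit) might be preferred in practice:

```python
import numpy as np

def fit_gumbel(extreme_fips):
    """Moment fit of F(x) = exp(-exp(-(x - mu) / beta)) to per-SVE max FIPs.

    For a Gumbel distribution, std = beta * pi / sqrt(6) and
    mean = mu + euler_gamma * beta; both relations are inverted here.
    Returns (mu, beta) = (location, scale).
    """
    x = np.asarray(extreme_fips, dtype=float)
    beta = x.std(ddof=1) * np.sqrt(6.0) / np.pi
    mu = x.mean() - np.euler_gamma * beta
    return mu, beta

def gumbel_quantile(mu, beta, p):
    """Inverse CDF: the FIP level not exceeded with probability p in an SVE."""
    return mu - beta * np.log(-np.log(p))
```

Ranking microstructures by, say, the 99th-percentile extreme FIP compares the distribution tails that drive HCF failure rather than the means.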
16. HCF/LCF Study Conclusions
• Protocols have been developed to evaluate the HCF and LCF resistance of polycrystalline materials:
γ^α = γ^α(τ^α),  ε_pl = Σ_{α=1}^{N} γ^α P^α
• HCF Study: New protocol 240X faster than traditional protocols
17. Acknowledgements
Also thanks to Donald S. Shih (Boeing), Yuksel C. Yabansu (GT), Dipen Patel (GT), and David Brough (GT)
Funding provided by: GOALI
19. References
• Alharbi HF, Kalidindi SR. Int J Plasticity 2015;66:71.
• Adams BL, Kalidindi SR, Fullwood DT. Microstructure Sensitive Design for Performance Optimization. Elsevier Science, 2012.
• Bunge HJ, Morris PR. Texture Analysis in Materials Science. Butterworth & Co, 1982.
• Fast T, Kalidindi SR. Acta Mater 2011;59:4595.
• Kalidindi SR. ISRN Mater Sci 2012;2012:13.
• Kröner E. J Mech Phys Solids 1977;25:137.
• Landi G, Niezgoda SR, Kalidindi SR. Acta Mater 2010;58:2716.
• Przybyla C, Prasannavenkatesan R, Salajegheh N, McDowell DL. Microstructure-sensitive modeling of high cycle fatigue. Int J Fatigue 2010;32(3):512-525.
• Przybyla CP, McDowell DL. Simulation-based extreme value marked correlations in fatigue of advanced engineering alloys. Procedia Eng 2010;2(1):1045-1056.
• Smith BD. Master's Thesis, 2013.
• Yabansu YC, Patel DK, Kalidindi SR. Acta Mater 2014;81:151.
Editor's Notes
Numerous possibilities for material microstructure and properties
It is expensive to experimentally evaluate new materials for strength, fatigue life, etc.
Computational tools provide a more efficient way to explore the space of materials!
In recent publications the fatigue lives of various materials have been compared using a computational approach
Talk about this procedure in moderate detail
Unfortunately, CPFEM is a major computational effort. In (Smith 2013) 5 materials were represented by 100 SVEs each. Each of these SVEs would likely take several hours on a supercomputer to calculate.
In this work we propose a procedure to compute these plastic strains hundreds of times faster.
This approach allows for the comparison of many more materials with the elimination of the CPFEM bottleneck.
For each microstructure we want to perform simulations and evaluate some homogenized property, in this case the HCF resistance
Traditionally we would generate a representative volume element (RVE), i.e. an instantiation of microstructure large enough that it captures the statistical variation of the property of interest (if we made the RVE larger, the homogenized property would not change). Unfortunately this is a problem because big RVEs require even bigger computational resources
The other option is to build a set of numerous smaller microstructure instantiations called statistical volume elements (SVEs). Each SVE takes much less time to evaluate so this approach is more computationally efficient.
“Fatigue crack formation in alpha/beta Ti alloys is primarily associated with the development of crystallographic facets at the grain scale” [Pryzbyla 2010]
7200 Hours is approximately 10 months
m(x, n) (microstructure function): probability density of finding local state n + dn at x + dx
m(x, n) dx dn: probability of finding local state n + dn at x + dx
α(r, n) (influence function): contribution to the local response at the current spatial location from local state n ± dn existing at a spatial location r + dr away
H: set of all possible distinct local states in the materials system
Γ(r) has a singularity at r = 0, and the convergence of the series is sensitive to the choice of C_R.
Instead of using C_R, the MKS localization relationships are calibrated through a linear regression of microstructures and their local responses.
Once calibrated, the MKS predicts the response of any microstructure in the materials system with low computational cost
Both the microstructure function and the influence function can be expressed as linear combinations of orthonormal basis functions in local state and spatial location.
An indicator basis is chosen for spatial location to produce a regular grid. This is needed so that the DFT may be used to decouple the convolution in the MKS series summation.
The contribution to local strain in the influence function decreases with increasing distance r.
The influence coefficients in the image decay very quickly.
This is consistent, as the material system is a two-phase composite with low contrast (1.5) between the elastic moduli of the two phases.
Talk about computational issues with the local state space for polycrystals, explain why some complicated mathematics may be required…
Question from Dr. Georges Cailletaud: You have used a voxelated mesh, how do you know that this is a good assumption in terms of the eventual prediction of FIPs? Will the non-smooth surface be an issue? How about the representation of small grains? Matthew Priddy's Answer: I believe Craig Przybyla looked at element size, but I am not sure if anyone has looked at element type or grain boundary shapes. I have seen a couple of papers in the literature that look at staircase (voxelated) versus smooth grain boundaries, but I don't believe I read anything definitive that made me believe the voxelated mesh wasn't sufficient.
Question from Dr. Georges Cailletaud: How come you don’t have a free surface for your BCs? From my understanding fatigue cracks often initiate from the surface as the stress state is more extreme.
Matthew Priddy’s Answer: “In titanium alloys, there is a transition from surface to subsurface fatigue crack initiation between 10^6 and 10^9 cycles (Przybyla thesis, page 36). Subsurface fatigue crack formation has also been seen on pyramidal planes in Ti-64 (Przybyla, 172). He also looked at surface nucleation (removing the PBCs) and found that they could be in competition with the subsurface formation. But, there is a question here of whether we are simulating a large enough volume for the surface effects to be accurate. I think that might be the more relevant item in this discussion. In order to accurately capture the surface effects, you would need a larger volume, I believe.”
My Comments: The HCF resistance for both textures is highest for z-axis loading (direction 1) because the material is stiffer in that direction (both have strong basal texture such that the c-axes of many grains align with the loading direction) therefore the maximum plastic strain range is much lower.
Question by Dr. Georges Cailletaud: While the plastic strain range is lower for loading in the z-direction, the normal stress will be higher. How then are you sure that with the FS-FIP that z-axis loading will display a better response?
My Answer to question: While the normal stress will increase for z-axis loading, (maybe from 850-1000 MPa), the plastic strain range will decrease by orders of magnitude. Therefore the FS-FIP will still be lower.
To determine the stress, each slip system of the material is then evaluated for plastic deformation, which is dislocation slip in this case. The flow rule used to describe the plastic shear strain rate of each slip system is a power-law flow rule. The drag, threshold, and back stress describe the non-directional and directional hardening aspects of the material, respectively, and the Macaulay brackets require the numerator to be greater than zero for gamma dot to take a non-zero value. Now, Ti-64 can either be made up of primary-alpha grains, which are solely an HCP crystal structure, or it can be made up of alpha and beta regions, which is a combination of HCP and BCC crystal structures. This model can handle either. If the grain is deemed a "colony" grain, then we consider it as a homogenized lamellar structure similar to the one shown. The Burgers orientation relationship (BOR) is used to describe the BCC crystal orientation relative to the HCP crystal orientation. Additionally, the CRSS values for the basal and select prismatic slip systems are increased, and the diameter term in the threshold stress is modified to reflect the lath width of the lamellar structure.
*Power-law Flow Rule. Backstress evolves according to an Armstrong-Frederick direct hardening/dynamic recovery relation.
*Threshold stress is combination of a Hall-Petch strengthening term and a softening term. The Hall-Petch diameter is the mean slip distance in the alpha-phase, whether it is pure alpha or a colony structure. The softening term follows a dynamic recovery.
*Drag stress is a function of the CRSS and the initial threshold stress.
*The CRSS values are strengthened for colony grains.
*The CRSS in compression is also modified to account for the tension-compression asymmetry that has been observed, experimentally. Possibly due to prismatic dislocations dissociating into pyramidal planes.
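Putting the notes above together, a generic power-law flow rule with back stress, threshold stress, drag stress, and Macaulay brackets can be sketched as below. The constants gamma0 and m are illustrative placeholders, not the calibrated Ti-64 parameters, and the Armstrong-Frederick hardening evolution equations are omitted:

```python
def slip_rate(tau, back, threshold, drag, gamma0=1e-3, m=20.0):
    """Plastic shearing rate on one slip system:

        gamma_dot = gamma0 * <(|tau - back| - threshold) / drag>**m
                    * sign(tau - back)

    The Macaulay brackets <.> zero the rate until the effective stress
    |tau - back| exceeds the threshold stress, so slip is inactive below it.
    """
    eff = abs(tau - back) - threshold   # effective driving stress
    if eff <= 0.0:
        return 0.0                      # Macaulay brackets: no slip
    sign = 1.0 if tau >= back else -1.0
    return gamma0 * (eff / drag) ** m * sign
```

Summing gamma_dot^α times the symmetric Schmid tensor P^α over the active slip systems then gives the plastic strain rate used in the explicit integration step of the protocol.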