International Conference of Fracture 2017: A procedure to determine the fracture properties of nuclear cladding from ring compression tests. A parametric study
IMAC 2010 Presentation: Error Quantification in Calibration of AFM Probes Due... – frentrup
Presentation at the International Modal Analysis Conference 2010 in Jacksonville, FL (1-4 Feb 2010) in the Uncertainty Quantification Division.
The presentation deals with improving calibration techniques for Atomic Force Microscopes by taking the variation in thickness of the instrument's micro-cantilever beam into account. It highlights the possible error in not accounting for this variation for two common calibration techniques and
High Speed Parameter Estimation for a Homogenized Energy Model - Doctoral Defe... – Jon Ernstberger
I used this presentation when making my final doctoral defense at NC State University in June 2008. My defense was entitled "High Speed Parameter Estimation for a Homogenized Energy Model". Dr. Ralph C. Smith was my advisor.
Nuclear Material Verification Based on MCNP and ISOCS™ Techniques for Safegu... – IOSRJAP
Mathematical techniques such as Monte Carlo simulation and the ISOCS™ software are increasingly employed in the absolute efficiency calibration of gamma-ray detectors. Monte Carlo simulations and Canberra's ISOCS™ software make it possible to establish an absolute efficiency curve over a desired energy range by numerical simulation, using the known or assumed geometry and chemical composition of the measured item. A broad-energy germanium (BEGe) detector was employed to perform NDA measurements on five standard reference nuclear materials (NBS, SNM-969). MC calculations were performed to evaluate factors (attenuation, geometry and efficiency) that affect the uranium isotope mass estimation. 235U and 238U masses were calculated based on MCNPX modelling calibration and on spectra analysis using the ISOCS™ calibration software. The results from the two efficiency calibration methods were compared with each other and with the declared value for each sample. They agree with the declared values within the estimated relative accuracy (ranging between -2.81% and 1.83%), indicating that the techniques could be applied for NM verification and characterization where closely matching NM standards are not available.
Computational and experimental investigation of aerodynamics of flapping aero... – Lahiru Dilshan
Renewed interest in exploiting flapping flight motions to attain high propulsive efficiency in air vehicles is inspired by the aerodynamics of bird and insect flight. Flapping characteristics are particularly attractive for developing micro aerial vehicles (MAVs), as flapping generates lift and thrust simultaneously. In this project, the variation of the flow properties and the thrust generation of an airfoil in a flapping (plunging) motion is evaluated using both computational and experimental methods. The NACA 2412 airfoil was selected for the study, and the computational work was carried out with an inviscid flow model and computational fluid dynamics (CFD) simulations in parallel, to obtain and compare the variation of the flow properties.
The inviscid model was developed using conformal mapping and potential flow theory, and it can produce results for any arbitrary aerofoil. Steady-state results from the CFD and inviscid-flow models (the computational framework) were compared and validated against flow visualisation and force sensing (the experimental framework). The validated CFD and inviscid models were then extended to impose a plunging motion on the aerofoil and obtain the variation of the drag and lift coefficients with time. The experimental setup was designed to measure the forces acting on the airfoil, and the flow characteristics were observed using a smoke flow visualisation technique; force measurements were made with a purpose-built, optimised load cell arrangement. The smoke visualisation successfully captures streamline patterns and flow separation regions, and these results, along with the wake development, were compared between the computational and experimental models. The level of agreement and the limitations of each method are discussed in the report.
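The abstract above mentions conformal mapping as the basis of the inviscid model. As a generic illustration of that idea (not the project's actual model, which handles arbitrary aerofoils), the classic Joukowski transform maps a circle in the complex zeta-plane to an aerofoil-like contour; the circle centre below is an illustrative choice:

```python
import cmath

# Minimal sketch of a conformal-mapping aerofoil: the Joukowski transform
# z = zeta + 1/zeta maps a circle to an aerofoil-like shape. Choosing the
# circle to pass through zeta = 1 produces the sharp trailing edge.

def joukowski_airfoil(n=8, center=complex(-0.1, 0.1)):
    radius = abs(1 - center)          # circle passes through zeta = 1
    theta0 = cmath.phase(1 - center)  # angle at which it reaches zeta = 1
    pts = []
    for k in range(n):
        theta = theta0 + 2 * cmath.pi * k / n
        zeta = center + radius * cmath.exp(1j * theta)
        pts.append(zeta + 1 / zeta)   # apply the Joukowski transform
    return pts
```

Since the mapped circle passes through zeta = 1, the first point is the trailing edge at z = 2 on the real axis.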
Examination of Tensile Test Specimens Produced in Three-Dimensional Printer by Fuat Kartal* in Crimson Publishers: Applied mechanical engineering
In this study, the effect of different process parameters on tensile test specimens produced with open-source code and equipment using PLA filament was investigated experimentally. Tensile specimens were designed and manufactured according to the ASTM Type IV tensile test standard. The test plan was based on the L9 orthogonal array of the Taguchi method, and the experiments were designed according to this plan. According to the results, the layer thickness and filling scan range parameters were found to provide a significant improvement in tensile strength.
Objectives of the experiment:
1 - Study the relationship between the force (P) and the elongation (ΔL).
2 - Study the relationship between strain (ε) and stress (σ).
3 - Study the mechanical properties of solids.
4 - Establish the modulus of elasticity (E).
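Objectives 1, 2 and 4 above amount to converting force/elongation readings into stress/strain and taking the slope of the elastic region as E. A minimal sketch, with hypothetical specimen dimensions and readings (not data from the experiment):

```python
# Convert force P and elongation dL into stress and strain, then
# estimate the modulus of elasticity E from the linear elastic region.

def stress_strain(forces_N, elongations_m, area_m2, gauge_len_m):
    stresses = [P / area_m2 for P in forces_N]            # sigma = P / A
    strains = [dL / gauge_len_m for dL in elongations_m]  # eps = dL / L0
    return stresses, strains

def youngs_modulus(stresses, strains):
    # Least-squares slope through the origin: E = sum(s*e) / sum(e*e)
    num = sum(s * e for s, e in zip(stresses, strains))
    den = sum(e * e for e in strains)
    return num / den
```

For a hypothetical 1 cm² specimen with 100 mm gauge length, two elastic readings give E directly as the stress/strain slope.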
Use of the PerkinElmer TMA 4000 to Perform Standard Test Methods in the Elect... – PerkinElmer, Inc.
This application note demonstrates the PerkinElmer TMA 4000 as a method to measure thermal expansion in electronics using standard test methods.
Learn more about the TMA 4000: http://bit.ly/1kwGYbY
Transient three-dimensional CFD modelling of a ceiling fan – Lahiru Dilshan
Ceiling fans are used to provide thermal comfort, especially in tropical countries. As the use of air conditioners grows, CO2 emissions increase; ceiling fans, though a more limited solution, save considerable energy compared to air conditioners. A ceiling fan generates a non-uniform velocity profile and hence a non-uniform thermal environment. That non-uniformity does not imply lower thermal comfort: the induced air velocity can provide adequate comfort at low energy cost. It does, however, make the environment difficult to analyse with simple modelling techniques, so predicting the performance of a ceiling fan requires more accurate models.
An accurate model of a ceiling fan involves complex geometry, which complicates the simulation process and requires high computational power. For this reason, several mathematical techniques have been used to predict ceiling fan performance, but these give only estimated values of the properties in the surrounding space.
Correlation between Residual Stress and hardness response generated by Laser ... – Dr. Suraiya Zabeen
We have investigated the hardening response, residual stress generation and microstructural changes in aluminium alloy 2624 due to laser shock peening. The alloy was studied in two heat treatment conditions, T351 and T39, which have a 20% difference in yield strength; hence the effects of laser power density and multiple peen impacts on materials with nominally identical physical properties but different hardening responses have been studied. Hardness was characterised by nanoindentation, and residual stresses were measured by incremental hole drilling.
The magnitude and the depth of the peak compressive residual stresses increase with increasing power densities as well as the number of laser impacts, before reaching a saturation point above which loss of surface compression occurs. Maximum compressive residual stresses were around −350 MPa, and maximum hardness increase was around 22%. The treatment has a noticeable effect in changing the microstructures of the T351 temper while the T39 remained almost unchanged.
Determination Of Geometric Stress Intensity Factor For A Photoelastic Compac... – Anupam Dhyani
Experimental and analytical (finite element) studies were carried out on a transparent polycarbonate material as a forerunner to a similar study on transparent glass-epoxy composites.
The COREMA system allows non-destructive resistivity testing of semi-insulating wafers made of materials such as SiC, GaN, GaAs and CdZnTe. The measurable range is 10^5 to 10^12 Ω·cm.
Fracture behaviour and damage characterisation in composite impact panels by ... – Fabien Léonard
Presentation made by Dr Arthur Wilkinson at the Thermosets 2013 conference in Berlin, Germany (September 18-20).
This work presents how single edge notch bend (SENB) fracture, Mode-I ILFT and computed tomography (CT) can be employed to characterise the fracture and impact behaviour of composite panels.
Technical data for Gossen Metrawatt ground testers: GEOHM PRO & GEOHM XTRA – PT. Siwali Swantika
To order products, contact PT Siwali Swantika via WhatsApp, Jakarta: 0811-1519-949 (chat only) | Surabaya: 0811-1519-948 (chat only). Visit our website at www.siwali.com for detailed specifications and model information.
The Innovative Laser Technologies in Cooperation with Ukrainian Universities – LvivPolytechnic
Presentation: The Innovative Laser Technologies in Cooperation with Ukrainian Universities
Presented by: Bogdan Antoszewski,
Kielce University of Technology
For: Ukrainian-Polish Forum «Technical Education for the Future of Europe»
Lviv, Ukraine, November 6-9, 2014
Improvement of Surface Roughness of Nickel Alloy Specimen by Removing Recast ... – IJMER
In this investigation, experimental and computational work are combined to improve the surface roughness of a nickel alloy specimen machined by CNC wire electric discharge machining (WEDM). Brass wire is used as the tool electrode and nickel alloy (Inconel 600) as the workpiece material. The machining parameters Pulse-On time (Ton), Pulse-Off time (Toff), Peak Current (Ip) and bed speed are taken as input parameters, with surface roughness and recast layer thickness as the output parameters. The experiments, with a pre-planned set of input parameters, are designed based on Taguchi's orthogonal array. Surface roughness is measured with a stylus-type roughness tester, and the thickness of the recast layer is measured with a scanning electron microscope (SEM). The experimental results are fed into the Minitab software, and the optimum input parameters for the desired outputs are identified. The software uses analysis of variance (ANOVA) to indicate the nature of the effect of the input parameters on the outputs, and confirmation is done through validation experiments. Once the recast layer thickness is obtained, chemical etching and abrasive blasting are performed to remove the recast layer, and the surface roughness is measured again with the stylus-type roughness tester. The results show a significant improvement in the surface roughness of the nickel alloy material. In addition, the work is simulated computationally using regression analysis, and the corresponding results are obtained.
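The Taguchi design mentioned above pairs an L9 orthogonal array (four 3-level factors: Ton, Toff, Ip, bed speed) with a main-effects analysis of the responses. A minimal sketch with hypothetical responses (not the paper's measurements, which were analysed in Minitab):

```python
# Standard L9(3^4) orthogonal array: each row gives the level index
# (0-2) of the four factors for one experimental run. Every level of
# every factor appears exactly three times, which is what makes the
# per-level response means (main effects) directly comparable.

L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

def main_effects(responses):
    # responses: one measured value (e.g. Ra in micrometres) per L9 run.
    # Returns, for each factor, the mean response at each of its 3 levels.
    effects = []
    for f in range(4):
        means = []
        for lvl in range(3):
            vals = [r for row, r in zip(L9, responses) if row[f] == lvl]
            means.append(sum(vals) / len(vals))
        effects.append(means)
    return effects
```

Ranking factors by the spread of their level means is the usual first step before the ANOVA significance test.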
KEMET Webinar - C4AQ/C4AF power box film capacitors – Ivana Ivanovska
Join us on the webinar to learn more about the applications where our C4AQ and C4AF series can be used. Our engineers will discuss the performance of these series.
Datasheet: Metrel MI 3102H BT. Contact PT. Siwali Swantika at 021-45850618 – PT. Siwali Swantika
Datasheet for the Metrel multifunctional electrical installation tester. For more detailed information, contact PT. Siwali Swantika, Jakarta office: 021-45850618, or Surabaya office: 031-8421264.
First results from the full-scale prototype for the Fluorescence detector Arr... – Toshihiro FUJII
The Fluorescence detector Array of Single-pixel Telescopes (FAST) is a design concept for the next generation of ultrahigh-energy cosmic ray (UHECR) observatories, addressing the requirements for a large-area, low-cost detector suitable for measuring the properties of the highest energy cosmic rays. In the FAST design, a large field of view is covered by a few pixels at the focal plane of a mirror or Fresnel lens. Motivated by the successful detection of UHECRs using a prototype comprised of a single 200 mm photomultiplier-tube and a 1 m2 Fresnel lens system [Astropart.Phys. 74 (2016) 64-72], we have developed a new full-scale prototype consisting of four 200 mm photomultiplier-tubes at the focus of a segmented mirror of 1.6 m in diameter. In October 2016 we installed the full-scale prototype at the Telescope Array site in central Utah, USA, and began steady data taking. We report on first results of the full-scale FAST prototype, including measurements of artificial light sources, distant ultraviolet lasers, and UHECRs.
35th International Cosmic Ray Conference (ICRC2017), 18th July 2017, Bexco, Busan, Korea
Fast Thermo-Optic Optimization of High-Order SOI Microring Optical Filters be... – TylerJamesZimmerling
We experimentally demonstrated a fast optimization algorithm based on the method of gradient descent for achieving optimum spectral response of high-order silicon microring optical filters. The filter optimization was performed on a 4th-order serially-coupled silicon microring filter by thermo-optically tuning the microring resonances using Ti/W heaters. Three different optimization objective functions were used to obtain the optimum filter shape, namely the single-wavelength method, the dual-wavelength method, and the total transmitted power method. The efficacy of each optimization method was evaluated and compared based on the number of required iterations, the ideality of the optimized response, and the wavelength tuning accuracy.
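The gradient-descent tuning described above can be sketched generically: with a physical device there is no analytic gradient, so the gradient of the measured objective with respect to the heater settings is estimated by finite differences. This is an illustrative sketch with a toy stand-in objective, not the authors' code or a model of their device:

```python
# Tune a set of "heater" settings by gradient descent on a measured
# scalar objective, estimating the gradient by finite differences.

def optimize(measure, x, step=0.1, delta=1e-3, iters=200):
    # measure(x) -> scalar to MINIMIZE (e.g. negative transmitted power
    # at the passband wavelength, as in the total-transmitted-power method).
    for _ in range(iters):
        base = measure(x)
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += delta                       # perturb one heater
            grad.append((measure(xp) - base) / delta)
        x = [xi - step * g for xi, g in zip(x, grad)]
    return x

# Hypothetical stand-in for the device response, optimum at (1, 2, 3):
toy = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2 + (x[2] - 3)**2
```

Each iteration costs one measurement per heater plus one baseline, which is why the paper compares methods by the number of required iterations.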
KEMET Webinar - Long Life and Humidity Grade New Film Box AC Filters – Markus Trautz
AC filter capacitors are used in power electronic equipment such as UPS systems, motor drives, inverters and battery chargers to absorb the harmonic currents generated in the different power conversion stages.
Film dielectrics have the best performance in terms of safety thanks to their self-healing property, which is particularly important in AC applications.
The new C4AF series, available in 250 Vac, 310 Vac and 400 Vac voltage ratings, combines strong current-handling performance, thanks to its low dissipation, with a heavy-duty characterization, withstanding an 85 °C / 85% RH test at rated voltage.
Phenomics assisted breeding in crop improvement – IshaGoswami9
The global population is increasing and will reach about 9 billion by 2050, and climate change makes it difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics governed by multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomic information at all growth stages have become as important as genotyping; high-throughput phenotyping has thus become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
What are greenhouse gases and how many gases affect the Earth? – moosaasad1975
What are greenhouse gases, how do they affect the Earth and its environment, and how do they influence the weather, the climate, and the future of the planet?
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... – University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
ESR spectroscopy in liquid food and beverages.pptx – PRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs, and packaging of food materials requires the food to be preserved. Among the various methods of treating food for preservation, irradiation is the most common and most harmless, as it does not alter the essential micronutrients of the food. Although irradiated food does not harm human health, quality assessment of the food is still required to provide consumers with the necessary information. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during its processing. The ESR spin-trapping technique is useful for detecting highly unstable radicals in food, and the antioxidant capability of liquid foods and beverages is mainly assessed by spin trapping.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk, Journées Nationales du GDR GPL 2024
Nutraceutical market, scope and growth: Herbal drug technology – Lokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market, which includes goods such as functional foods, drinks, and dietary supplements that provide health advantages beyond basic nutrition, is growing significantly. Rising healthcare costs, an ageing population, and increasing demand for natural and preventative health solutions are driving this rapid expansion. Innovations in product formulation and the use of cutting-edge technology for customized nutrition further fuel market growth. With its worldwide reach, the nutraceutical industry is expected to keep growing and to provide significant opportunities for research and investment across categories including vitamins, minerals, probiotics, and herbal supplements.
Observation of Io's Resurfacing via Plume Deposition Using Ground-based Adapt... – Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io's surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io's surface using adaptive optics at visible wavelengths.
Toxic effects of heavy metals: Lead and Arsenic – sanjana502982
Heavy metals are naturally occurring metallic chemical elements that have a relatively high density and are toxic even at low concentrations. In common usage, all toxic metals are termed heavy metals irrespective of their atomic mass and density, e.g. arsenic, lead, mercury, cadmium, thallium, chromium, etc.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V... – Wasswaderrick3
In this book, we use conservation-of-energy techniques on a fluid element to derive the modified Bernoulli equation of flow with viscous (friction) effects. We derive the general equation of flow/velocity, and from it the Poiseuille flow equation, the transition flow equation and the turbulent flow equation. Where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist, and use the modified Bernoulli equation to derive flow-rate equations for pipes of different cross-sectional areas connected together. We then extend the energy-conservation techniques to a sphere falling in a viscous medium under gravity: we demonstrate Stokes' equation of terminal velocity and the turbulent flow equation, look at a way of calculating the time taken for a body to fall in a viscous medium, and derive the general equation of terminal velocity.
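The Stokes terminal-velocity result mentioned above follows from balancing the Stokes drag against the sphere's weight minus buoyancy, and is easy to evaluate numerically. The sphere and fluid properties below are illustrative values (roughly a 1 mm steel sphere in glycerine), not data from the book:

```python
# Stokes terminal velocity: in creeping flow the drag 6*pi*mu*r*v
# balances gravity minus buoyancy, giving
#   v_t = 2 * r^2 * (rho_s - rho_f) * g / (9 * mu)

def stokes_terminal_velocity(r, rho_s, rho_f, mu, g=9.81):
    # r: sphere radius [m]; rho_s, rho_f: sphere and fluid density [kg/m^3]
    # mu: dynamic viscosity [Pa*s]; returns terminal velocity [m/s]
    return 2 * r**2 * (rho_s - rho_f) * g / (9 * mu)

v_t = stokes_terminal_velocity(r=1e-3, rho_s=7800.0, rho_f=1260.0, mu=1.5)
```

The result (about 9.5 mm/s here) is only valid while the Reynolds number stays small, which is the regime the Stokes derivation assumes.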
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... (Sérgio Sacani)
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4–0.9 µm) and novel JWST images with 14 filters spanning 0.8–5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3–31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5–15. These objects show compact half-light radii of R_1/2 ∼ 50–200 pc, stellar masses of M⋆ ∼ 10^7–10^8 M⊙, and star-formation rates of SFR ∼ 0.1–1 M⊙ yr^-1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... (Ana Luísa Pinho)
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their capacity to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
ICF 2017
1. Determination of Fracture Toughness of Nuclear
Fuel Cladding from Ring Compression Tests
F.J. Gomez(1), M.A. Martin-Rengel(2), J. Ruiz-Hervias(2)
• (1) ADVANCED MATERIAL SIMULATION, Bilbao, Spain
• (2) Universidad Politecnica de Madrid, Spain
www.amsimulation.com
E-mail: javier.gomez@amsimulation.com
6. Numerical modelling
Finite elements
• ABAQUS
• 6- to 8-node elements
• 5 mm element size
• NLGEOM (geometric nonlinearity)
• Contact at upper surface → rigid surface
• Friction coefficient 0.125
• Cohesive zone model
• Linear softening curve
• User subroutine UEL
[Figure: linear softening curve, cohesive stress σ versus opening displacement w, with tensile strength f_t, critical opening w_c and fracture energy G_f = ∫_0^{w_c} f(w) dw]
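As a minimal sketch of the cohesive law above, assuming the linear form σ(w) = f_t·(1 − w/w_c) so that the fracture energy reduces to G_f = f_t·w_c/2:

```python
def softening_stress(w, f_t, w_c):
    """Cohesive stress for a crack opening w, linear softening law."""
    if w < 0:
        raise ValueError("opening must be non-negative")
    # Stress decays linearly from f_t at w = 0 to zero at w = w_c.
    return f_t * max(0.0, 1.0 - w / w_c)

def fracture_energy(f_t, w_c):
    """G_f = integral of f(w) dw from 0 to w_c = f_t * w_c / 2 for linear softening."""
    return 0.5 * f_t * w_c

# Illustrative values in the range scanned on the slides:
# f_t = 1000 MPa, w_c/2 = 0.06 mm -> w_c = 0.12 mm, G_f in MPa*mm = N/mm.
Gf = fracture_energy(1000.0, 0.12)
```

Note that the two cohesive parameters (w_c, f_t) are exactly the axes of the calibration maps shown in the error-estimation slides.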
17. Error estimation
• The fitting procedure introduces an error.
• Similar P-d curves correspond to a range of Fracture Toughness values.
• There exists a relationship between Err and the Fracture Toughness error.
• The final error in the Fracture Toughness depends on the quality of the fit.
[Figure: calibration map of f_t (MPa), 400–1400, versus w_c/2 (mm), 0.04–0.11, with iso-toughness curves K_IC = 75 MPa·m^0.5 and K_IC = 61 MPa·m^0.5]
Err = (1 / (P_max² · u_fin)) ∫_0^{u_fin} [P_exp(u) − P_num(u)]² du
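A minimal numerical sketch of this error functional, using trapezoidal integration of sampled P-d curves; the normalization by P_max and u_fin is an assumption reconstructed from the slide, not necessarily the authors' exact definition:

```python
def fit_error(u, p_exp, p_num):
    """Normalized least-squares mismatch between experimental and
    numerical load-displacement curves sampled at displacements u."""
    assert len(u) == len(p_exp) == len(p_num)
    p_max = max(p_exp)          # peak experimental load
    u_fin = u[-1] - u[0]        # displacement range of the test
    total = 0.0
    # Trapezoidal rule on the squared load difference.
    for i in range(len(u) - 1):
        d0 = (p_exp[i] - p_num[i]) ** 2
        d1 = (p_exp[i + 1] - p_num[i + 1]) ** 2
        total += 0.5 * (d0 + d1) * (u[i + 1] - u[i])
    return total / (p_max ** 2 * u_fin)
```

A perfect fit gives Err = 0; a uniform 10 % load offset gives Err = 0.01, which sets the scale of the Err axis in the plots that follow.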
18. Error estimation
The relation Err – ΔK_IC is determined numerically by using the optimum as the reference curve
Err = (1 / (P_max² · u_fin)) ∫_0^{u_fin} [P_exp(u) − P_num(u)]² du
[Figure: ΔK/K_IC,ref (%), from 0 to 15, versus Err, from 0 to 0.002]
Coarse-mesh calculation + POD-RBF interpolation
[Figure: calibration map of f_t (MPa), 400–1400, versus w_c/2 (mm), 0.04–0.11, with iso-toughness curves K_IC = 75 MPa·m^0.5 and K_IC = 61 MPa·m^0.5]
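The coarse-mesh + POD-RBF surrogate can be sketched as follows: force-curve snapshots from the coarse FE grid are compressed by SVD and the modal amplitudes interpolated with radial basis functions. The Gaussian kernel, its width `eps` and the function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def build_pod_rbf(params, snapshots, n_modes=2, eps=1.0):
    """params: (n_samples, n_params) coarse-grid (w_c, f_t) points;
    snapshots: (n_samples, n_points) P-d curves from coarse FE runs.
    Returns a function predicting the curve at a new parameter point."""
    # POD: truncated SVD of the snapshot matrix.
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    basis = Vt[:n_modes]                 # dominant spatial modes
    coeffs = snapshots @ basis.T         # modal amplitudes per sample

    def kernel(a, b):
        # Gaussian RBF on squared parameter distance.
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-eps * d2)

    K = kernel(params, params)
    weights = np.linalg.solve(K, coeffs)  # exact interpolation at samples

    def predict(p):
        k = kernel(np.atleast_2d(p), params)
        return (k @ weights) @ basis      # interpolated P-d curve
    return predict
```

Because the RBF system is solved exactly, the surrogate reproduces the coarse-mesh curves at the training points and interpolates smoothly in between, which is what makes a dense Err map affordable.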
19. Error estimation
The relation Err – ΔK_IC is determined numerically by using the optimum as the reference curve
Err = (1 / (P_max² · u_fin)) ∫_0^{u_fin} [P_exp(u) − P_num(u)]² du
1. Fix a value Err*
2. Determine the set of points (w_c, f_t) whose error relative to the reference curve is less than or equal to Err*
3. Calculate K*_IC,max and K*_IC,min
4. ΔK* = K*_IC,max − K*_IC,min
[Figure: calibration map of f_t (MPa), 400–1400, versus w_c/2 (mm), 0.04–0.11, with iso-toughness curves K_IC = 75 MPa·m^0.5 and K_IC = 61 MPa·m^0.5]
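Steps 1–4 amount to thresholding the error map and taking the spread of toughness over the admissible set. A minimal sketch, where the `(err, k_ic)` pairs per grid point are hypothetical inputs standing in for the surrogate evaluations:

```python
def toughness_band(points, err_star):
    """points: iterable of (err, k_ic) tuples, one per (w_c, f_t) grid point.
    Returns (K*_IC_min, K*_IC_max, DK*) for the chosen tolerance Err*."""
    # Step 2: keep only grid points within the error tolerance.
    admissible = [k for err, k in points if err <= err_star]
    if not admissible:
        raise ValueError("no grid point satisfies Err <= Err*")
    # Steps 3-4: extremal toughness values and their spread.
    k_min, k_max = min(admissible), max(admissible)
    return k_min, k_max, k_max - k_min
```

The band ΔK* widens monotonically with Err*, which is the relation plotted as ΔK/K_IC,ref versus Err on these slides.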
20. Error estimation
The relation Err – ΔK_IC is determined numerically by using the optimum as the reference curve
Err = (1 / (P_max² · u_fin)) ∫_0^{u_fin} [P_exp(u) − P_num(u)]² du
[Figure: ΔK/K_J,ref (%), from 0 to 15, versus Err, from 0 to 0.002]
22. Conclusions
• The proposed procedure combines experimental tests, finite element simulations and an optimization algorithm to determine the Fracture Toughness.
• The method has been applied successfully to ZIRLO tested at three temperatures, two velocities and six hydride contents.
• The numerical calculations closely fit the experimental data.
• The proposed inverse method introduces an error that has been estimated numerically.
• The final output is K_IC ± ΔK_IC
24. Background
Material → ZIRLO
Ring compression tests
Cylinders: 10 mm height, 9.5 mm external diameter, 0.57 mm wall thickness
Hydrides
Cathodic charging in KOH aqueous solutions + thermal treatment
Hydrogen concentrations studied: 0, 150, 250, 500, 1200 and 2000 wppm
[Micrographs: 150, 500 and 1200 wppm of H]
25. Background
[Figure: load-displacement curves, P (kN), 0–1, versus d (mm), 0–7, experimental and numerical, at 20 ºC, 135 ºC and 300 ºC]
[Figure: stress (MPa), 0–1200, versus plastic strain, 0–0.3, at 20 ºC, 135 ºC and 300 ºC]
[Figure: linear softening curve, cohesive stress σ versus opening displacement w, with tensile strength f_t, critical opening w_c and fracture energy G_f = ∫_0^{w_c} f(w) dw]
• Linear softening
• User element subroutine UEL
26. Background
Err = (1 / (P_max² · u_fin)) ∫_0^{u_fin} [P_exp(u) − P_num(u)]² du
1. Exhaustive search on an N×N matrix – coarse mesh
2. Critical region analysis – coarse mesh
3. Nelder-Mead downhill simplex – coarse mesh
4. Nelder-Mead downhill simplex – fine mesh
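The coarse-then-fine search above can be sketched with SciPy's Nelder-Mead solver. The quadratic `err` below stands in for the actual FE + Err evaluation, and its minimum location is a made-up example; only the two-stage structure mirrors the slide:

```python
import numpy as np
from scipy.optimize import minimize

def two_stage_search(err_fun, wc_range, ft_range, n=11):
    """Exhaustive n x n scan of (w_c, f_t), then Nelder-Mead refinement."""
    wcs = np.linspace(*wc_range, n)
    fts = np.linspace(*ft_range, n)
    # Steps 1-2: coarse exhaustive search for the best starting point.
    best = min(((err_fun([w, f]), w, f) for w in wcs for f in fts))
    # Steps 3-4: downhill simplex refinement from the best grid point.
    res = minimize(err_fun, x0=[best[1], best[2]], method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-12})
    return res.x

# Hypothetical smooth error landscape with a minimum at (0.06, 900).
err = lambda p: (p[0] - 0.06) ** 2 + ((p[1] - 900.0) / 1e4) ** 2
wc_opt, ft_opt = two_stage_search(err, (0.04, 0.11), (400.0, 1400.0))
```

The coarse scan protects the simplex from distant local minima, while the derivative-free refinement suits an objective obtained from FE simulations, where gradients are unavailable.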